Test Report: Docker_Windows 22141

2191194101c4a9ddc7fa6949616ce2e0ec39dec5:2025-12-16:42801

Failed tests (34/427)

Order  Failed test  Duration (s)
67 TestErrorSpam/setup 46.82
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 519.76
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 373.1
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 53.66
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 53.84
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 3.25
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 739.29
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 53.99
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 20.2
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 4.11
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 122.38
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 242.82
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 23.95
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 53.98
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell 3.1
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.1
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.5
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.48
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.51
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.52
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.5
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 20.18
360 TestKubernetesUpgrade 833.55
411 TestStartStop/group/no-preload/serial/FirstStart 532.8
435 TestStartStop/group/newest-cni/serial/FirstStart 520.13
453 TestStartStop/group/no-preload/serial/DeployApp 5.78
454 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 114.5
458 TestStartStop/group/no-preload/serial/SecondStart 379.18
468 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 99.73
483 TestStartStop/group/newest-cni/serial/SecondStart 381.88
497 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.94
511 TestStartStop/group/newest-cni/serial/Pause 12.22
512 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 225.28
TestErrorSpam/setup (46.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-836400 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-836400 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 --driver=docker: (46.816313s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-836400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22141
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-836400" primary control-plane node in "nospam-836400" cluster
* Pulling base image v0.0.48-1765661130-22141 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-836400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (46.82s)
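The unexpected stderr above is minikube's registry-connectivity warning, and the docs URL it prints covers passing proxy settings through to the cluster. A minimal sketch of that setup in PowerShell, assuming a hypothetical proxy address (proxy.example:3128 is a placeholder, not taken from this run):

	# Placeholder proxy address; substitute the real one for this host.
	$env:HTTPS_PROXY = "http://proxy.example:3128"
	# NO_PROXY should include the minikube IP (192.168.49.2 in this run)
	# so traffic to the cluster itself bypasses the proxy.
	$env:NO_PROXY = "localhost,127.0.0.1,192.168.49.2"
	out/minikube-windows-amd64.exe start -p nospam-836400 --driver=docker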

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (519.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0
E1216 04:47:01.792723   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:55.666702   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:55.673699   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:55.685974   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:55.707858   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:55.750288   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:55.831883   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:55.993103   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:56.315942   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:56.957588   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:58.239321   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:50:00.801193   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:50:05.924047   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:50:16.166158   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:50:36.647787   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:51:17.610742   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:52:01.794223   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:52:39.533441   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:53:24.868735   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m37.163838s)
-- stdout --
	* [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - HTTP_PROXY=localhost:49308
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=localhost:49308
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-002200 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-002200 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.003210092s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001590635s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001590635s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
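minikube's own suggestion in the stderr above (pass --extra-config=kubelet.cgroup-driver=systemd and check 'journalctl -xeu kubelet') restated as a retry command; a sketch of that printed advice, not a verified fix for this failure:

	out/minikube-windows-amd64.exe start -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd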
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:
-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 6 (582.6838ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1216 04:53:57.034748    7776 status.go:458] kubeconfig endpoint: get endpoint: "functional-002200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
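The status output warns that kubectl points at a stale context and names the remedy itself; as a sketch, using the profile flag seen elsewhere in this run:

	out/minikube-windows-amd64.exe update-context -p functional-002200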
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.03404s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-902700 image rm kicbase/echo-server:functional-902700 --alsologtostderr                                      │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service        │ functional-902700 service hello-node --url --format={{.IP}}                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image save --daemon kicbase/echo-server:functional-902700 --alsologtostderr                           │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/11704.pem                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /usr/share/ca-certificates/11704.pem                                                     │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/51391683.0                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/117042.pem                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /usr/share/ca-certificates/117042.pem                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/test/nested/copy/11704/hosts                                                        │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ update-context │ functional-902700 update-context --alsologtostderr -v=2                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ update-context │ functional-902700 update-context --alsologtostderr -v=2                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format short --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh pgrep buildkitd                                                                                   │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr                  │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service        │ functional-902700 service hello-node --url                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format yaml --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format table --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format json --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ delete         │ -p functional-902700                                                                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │ 16 Dec 25 04:45 UTC │
	│ start          │ -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:45:19
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:45:19.311937    6744 out.go:360] Setting OutFile to fd 2000 ...
	I1216 04:45:19.353418    6744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:45:19.353418    6744 out.go:374] Setting ErrFile to fd 1884...
	I1216 04:45:19.353418    6744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:45:19.367320    6744 out.go:368] Setting JSON to false
	I1216 04:45:19.370582    6744 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1341,"bootTime":1765858978,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:45:19.370582    6744 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:45:19.374274    6744 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:45:19.377922    6744 notify.go:221] Checking for updates...
	I1216 04:45:19.378449    6744 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:45:19.380215    6744 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:45:19.381974    6744 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:45:19.385079    6744 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:45:19.388828    6744 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:45:19.391346    6744 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:45:19.508294    6744 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:45:19.512323    6744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:45:19.745777    6744 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-16 04:45:19.72360858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:45:19.748995    6744 out.go:179] * Using the docker driver based on user configuration
	I1216 04:45:19.752656    6744 start.go:309] selected driver: docker
	I1216 04:45:19.752656    6744 start.go:927] validating driver "docker" against <nil>
	I1216 04:45:19.752656    6744 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:45:19.838292    6744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:45:20.067499    6744 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-16 04:45:20.047756229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:45:20.068147    6744 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:45:20.068735    6744 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:45:20.071204    6744 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 04:45:20.073855    6744 cni.go:84] Creating CNI manager for ""
	I1216 04:45:20.073923    6744 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:45:20.073986    6744 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	W1216 04:45:20.074053    6744 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	W1216 04:45:20.074116    6744 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	I1216 04:45:20.074196    6744 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:45:20.077390    6744 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 04:45:20.080543    6744 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:45:20.084126    6744 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:45:20.086864    6744 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:45:20.086864    6744 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:45:20.086864    6744 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 04:45:20.086864    6744 cache.go:65] Caching tarball of preloaded images
	I1216 04:45:20.086864    6744 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 04:45:20.086864    6744 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 04:45:20.087863    6744 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 04:45:20.087863    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json: {Name:mk1fba2f12b79d42f8567863c02c849401f074d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:20.168066    6744 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:45:20.168066    6744 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:45:20.168066    6744 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:45:20.168066    6744 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:45:20.168066    6744 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-002200"
	I1216 04:45:20.168066    6744 start.go:93] Provisioning new machine with config: &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 04:45:20.168606    6744 start.go:125] createHost starting for "" (driver="docker")
	I1216 04:45:20.175309    6744 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1216 04:45:20.175917    6744 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	W1216 04:45:20.175917    6744 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:49308 to docker env.
	I1216 04:45:20.175917    6744 start.go:159] libmachine.API.Create for "functional-002200" (driver="docker")
	I1216 04:45:20.175917    6744 client.go:173] LocalClient.Create starting
	I1216 04:45:20.175917    6744 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 04:45:20.175917    6744 main.go:143] libmachine: Decoding PEM data...
	I1216 04:45:20.175917    6744 main.go:143] libmachine: Parsing certificate...
	I1216 04:45:20.176874    6744 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 04:45:20.176874    6744 main.go:143] libmachine: Decoding PEM data...
	I1216 04:45:20.176874    6744 main.go:143] libmachine: Parsing certificate...
	I1216 04:45:20.182884    6744 cli_runner.go:164] Run: docker network inspect functional-002200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 04:45:20.245870    6744 cli_runner.go:211] docker network inspect functional-002200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 04:45:20.249873    6744 network_create.go:284] running [docker network inspect functional-002200] to gather additional debugging logs...
	I1216 04:45:20.249873    6744 cli_runner.go:164] Run: docker network inspect functional-002200
	W1216 04:45:20.300388    6744 cli_runner.go:211] docker network inspect functional-002200 returned with exit code 1
	I1216 04:45:20.300388    6744 network_create.go:287] error running [docker network inspect functional-002200]: docker network inspect functional-002200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-002200 not found
	I1216 04:45:20.300388    6744 network_create.go:289] output of [docker network inspect functional-002200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-002200 not found
	
	** /stderr **
	I1216 04:45:20.304395    6744 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 04:45:20.368733    6744 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001818150}
	I1216 04:45:20.368733    6744 network_create.go:124] attempt to create docker network functional-002200 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 04:45:20.371906    6744 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-002200 functional-002200
	I1216 04:45:20.524021    6744 network_create.go:108] docker network functional-002200 192.168.49.0/24 created
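The two steps above reserve the first free private /24 (192.168.49.0/24) and materialize it as a user-defined bridge with a fixed gateway and MTU. A quick way to confirm what was created, from the host (illustrative session, same profile name as this run):

    # Show the subnet and gateway minikube assigned to the cluster network
    docker network inspect functional-002200 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected, per the log above: 192.168.49.0/24 192.168.49.1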
	I1216 04:45:20.524021    6744 kic.go:121] calculated static IP "192.168.49.2" for the "functional-002200" container
	I1216 04:45:20.532321    6744 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 04:45:20.589111    6744 cli_runner.go:164] Run: docker volume create functional-002200 --label name.minikube.sigs.k8s.io=functional-002200 --label created_by.minikube.sigs.k8s.io=true
	I1216 04:45:20.653912    6744 oci.go:103] Successfully created a docker volume functional-002200
	I1216 04:45:20.657261    6744 cli_runner.go:164] Run: docker run --rm --name functional-002200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-002200 --entrypoint /usr/bin/test -v functional-002200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 04:45:22.073644    6744 cli_runner.go:217] Completed: docker run --rm --name functional-002200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-002200 --entrypoint /usr/bin/test -v functional-002200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.4163764s)
	I1216 04:45:22.073644    6744 oci.go:107] Successfully prepared a docker volume functional-002200
	I1216 04:45:22.073644    6744 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:45:22.073644    6744 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 04:45:22.076570    6744 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-002200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 04:45:39.878574    6744 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-002200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (17.8019248s)
	I1216 04:45:39.878664    6744 kic.go:203] duration metric: took 17.8049401s to extract preloaded images to volume ...
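The extraction step seeds the cluster's named volume before the node container even exists: a throwaway container mounts the host-side tarball read-only, mounts the volume at /extractDir, and runs tar as its entrypoint. The same pattern works for pre-populating any named volume; a sketch, assuming the image ships /usr/bin/tar with lz4 support (as kicbase evidently does here; volume and archive names are illustrative):

    # Seed a named volume from a host-side .tar.lz4 using a disposable container
    docker volume create demo-vol
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141 \
      -I lz4 -xf /preloaded.tar -C /extractDir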
	I1216 04:45:39.882984    6744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:45:40.113772    6744 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-16 04:45:40.095078244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:45:40.117792    6744 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 04:45:40.355874    6744 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-002200 --name functional-002200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-002200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-002200 --network functional-002200 --ip 192.168.49.2 --volume functional-002200:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
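Unpacking that run command: the node is a privileged container pinned to the static IP calculated earlier, with /var backed by the preloaded volume, memory/CPU caps taken from the cluster config, and SSH (22), the Docker TLS port (2376), and the API server port (8441) published on loopback with Docker-assigned host ports. A trimmed, annotated equivalent (illustrative only; the authoritative invocation is the one logged above):

    # privileged node container on the cluster network, preloaded /var,
    # resource caps from the config, key ports published on 127.0.0.1 only
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --network functional-002200 --ip 192.168.49.2 \
      --volume functional-002200:/var \
      --memory=4096mb --memory-swap=4096mb --cpus=2 \
      --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::8441 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141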
	I1216 04:45:41.005138    6744 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Running}}
	I1216 04:45:41.067997    6744 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:45:41.123126    6744 cli_runner.go:164] Run: docker exec functional-002200 stat /var/lib/dpkg/alternatives/iptables
	I1216 04:45:41.226674    6744 oci.go:144] the created container "functional-002200" has a running status.
	I1216 04:45:41.226674    6744 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa...
	I1216 04:45:41.307652    6744 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 04:45:41.386930    6744 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:45:41.450785    6744 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 04:45:41.450785    6744 kic_runner.go:114] Args: [docker exec --privileged functional-002200 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 04:45:41.596981    6744 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa...
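With the public key in /home/docker/.ssh/authorized_keys and the private key locked down, the machine is reachable over plain SSH through whatever host port Docker mapped to 22/tcp (49317 in this run). A manual session would look roughly like this (Unix-style sketch; this run's key actually lives under the MINIKUBE_HOME path shown in the log):

    # Resolve the host port mapped to the node's sshd, then connect with the generated key
    PORT=$(docker container inspect functional-002200 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}')
    ssh -i "$MINIKUBE_HOME/machines/functional-002200/id_rsa" \
      -o StrictHostKeyChecking=no -p "$PORT" docker@127.0.0.1 hostname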
	I1216 04:45:43.741971    6744 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:45:43.805596    6744 machine.go:94] provisionDockerMachine start ...
	I1216 04:45:43.811620    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:43.894332    6744 main.go:143] libmachine: Using SSH client type: native
	I1216 04:45:43.911222    6744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:45:43.911222    6744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:45:44.079279    6744 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:45:44.079279    6744 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 04:45:44.082905    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:44.138167    6744 main.go:143] libmachine: Using SSH client type: native
	I1216 04:45:44.138638    6744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:45:44.138638    6744 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 04:45:44.309555    6744 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:45:44.313004    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:44.370624    6744 main.go:143] libmachine: Using SSH client type: native
	I1216 04:45:44.370684    6744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:45:44.370684    6744 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:45:44.529990    6744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:45:44.529990    6744 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 04:45:44.529990    6744 ubuntu.go:190] setting up certificates
	I1216 04:45:44.529990    6744 provision.go:84] configureAuth start
	I1216 04:45:44.534610    6744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:45:44.589761    6744 provision.go:143] copyHostCerts
	I1216 04:45:44.590218    6744 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 04:45:44.590218    6744 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 04:45:44.590337    6744 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 04:45:44.591887    6744 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 04:45:44.591887    6744 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 04:45:44.592313    6744 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 04:45:44.593583    6744 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 04:45:44.593583    6744 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 04:45:44.593931    6744 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 04:45:44.594623    6744 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 04:45:44.803358    6744 provision.go:177] copyRemoteCerts
	I1216 04:45:44.806367    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:45:44.809365    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:44.867314    6744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:45:44.982317    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:45:45.006835    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:45:45.028727    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1216 04:45:45.055422    6744 provision.go:87] duration metric: took 525.4294ms to configureAuth
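configureAuth refreshed the host-side CA material and minted a server certificate whose SANs (127.0.0.1, 192.168.49.2, functional-002200, localhost, minikube) cover every name the Docker TLS endpoint might be addressed by. The SAN list can be confirmed with openssl (illustrative):

    # Inspect the SANs baked into the provisioned server certificate
    openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'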
	I1216 04:45:45.055422    6744 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:45:45.056578    6744 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:45:45.061589    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:45.117373    6744 main.go:143] libmachine: Using SSH client type: native
	I1216 04:45:45.117816    6744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:45:45.117850    6744 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 04:45:45.279082    6744 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 04:45:45.279082    6744 ubuntu.go:71] root file system type: overlay
	I1216 04:45:45.279082    6744 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 04:45:45.282575    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:45.340495    6744 main.go:143] libmachine: Using SSH client type: native
	I1216 04:45:45.341091    6744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:45:45.341175    6744 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 04:45:45.530755    6744 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 04:45:45.534946    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:45.590972    6744 main.go:143] libmachine: Using SSH client type: native
	I1216 04:45:45.591242    6744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:45:45.591242    6744 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 04:45:47.024118    6744 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 04:45:45.513864989 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 04:45:47.024118    6744 machine.go:97] duration metric: took 3.2184541s to provisionDockerMachine
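The unit update just logged is an idempotent write-then-swap: render the desired unit to docker.service.new, and only if it differs from the live file, move it into place, reload systemd, re-enable, and restart. The same shape generalizes to any SSH-managed config file; a sketch (hypothetical helper, not minikube code):

    # Swap $src into $dst only when content differs, then run the reload hooks
    update_unit() {
      local src=$1 dst=$2
      if ! sudo diff -u "$dst" "$src" >/dev/null; then
        sudo mv "$src" "$dst"
        sudo systemctl -f daemon-reload
        sudo systemctl -f enable docker
        sudo systemctl -f restart docker
      fi
    }
    update_unit /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service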
	I1216 04:45:47.024118    6744 client.go:176] duration metric: took 26.8480798s to LocalClient.Create
	I1216 04:45:47.024118    6744 start.go:167] duration metric: took 26.8480798s to libmachine.API.Create "functional-002200"
	I1216 04:45:47.024118    6744 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 04:45:47.024118    6744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:45:47.028818    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:45:47.031716    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:47.087365    6744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:45:47.220035    6744 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:45:47.227684    6744 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:45:47.227684    6744 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:45:47.227684    6744 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 04:45:47.228297    6744 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 04:45:47.228879    6744 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 04:45:47.229271    6744 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 04:45:47.233116    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 04:45:47.245265    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 04:45:47.270636    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 04:45:47.295692    6744 start.go:296] duration metric: took 271.5426ms for postStartSetup
	I1216 04:45:47.302105    6744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:45:47.355496    6744 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 04:45:47.361992    6744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:45:47.364118    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:47.421929    6744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:45:47.552565    6744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:45:47.560773    6744 start.go:128] duration metric: took 27.3920427s to createHost
	I1216 04:45:47.560773    6744 start.go:83] releasing machines lock for "functional-002200", held for 27.3925831s
	I1216 04:45:47.565683    6744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:45:47.620314    6744 out.go:179] * Found network options:
	I1216 04:45:47.623384    6744 out.go:179]   - HTTP_PROXY=localhost:49308
	W1216 04:45:47.625258    6744 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1216 04:45:47.627702    6744 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1216 04:45:47.631548    6744 out.go:179]   - HTTP_PROXY=localhost:49308
	I1216 04:45:47.633937    6744 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 04:45:47.638082    6744 ssh_runner.go:195] Run: cat /version.json
	I1216 04:45:47.638082    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:47.641389    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:45:47.695152    6744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:45:47.695152    6744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	W1216 04:45:47.808295    6744 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
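This is the root of the "Failing to connect to https://registry.k8s.io/" warning this report's setup test flags: the connectivity probe is executed inside the Linux node but carries the Windows binary name, and bash has no curl.exe. The check the probe intends can be run by hand with the node's own curl (e.g.):

    # Probe registry reachability from inside the node; Linux has curl, not curl.exe
    minikube -p functional-002200 ssh -- curl -sS -m 2 https://registry.k8s.io/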
	I1216 04:45:47.812124    6744 ssh_runner.go:195] Run: systemctl --version
	I1216 04:45:47.828410    6744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:45:47.835883    6744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:45:47.840510    6744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:45:47.889022    6744 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
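Note that the runner logs the find invocation with its shell quoting stripped, so the line above is not copy-paste safe. A shell-safe form of the same sweep, renaming any bridge/podman CNI configs out of the way (illustrative):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;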
	I1216 04:45:47.889022    6744 start.go:496] detecting cgroup driver to use...
	I1216 04:45:47.889139    6744 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:45:47.889248    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:45:47.913375    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:45:47.930787    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:45:47.943975    6744 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:45:47.948203    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:45:47.965833    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:45:47.983459    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1216 04:45:47.996131    6744 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 04:45:47.996176    6744 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 04:45:48.002341    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:45:48.020236    6744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:45:48.040152    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 04:45:48.061341    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:45:48.081729    6744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
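Taken together, these sed passes pin the pause image to registry.k8s.io/pause:3.10.1, force cgroupfs (SystemdCgroup = false) to match the detected host driver, normalize the runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A quick check that the edits landed (illustrative):

    grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml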
	I1216 04:45:48.100008    6744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:45:48.118722    6744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
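Both kernel knobs are prerequisites for pod networking: bridge-nf-call-iptables makes bridged pod traffic traverse iptables, and ip_forward lets the node route between the pod and service CIDRs. Verifying (illustrative):

    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both should report 1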
	I1216 04:45:48.137457    6744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:45:48.265298    6744 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 04:45:48.428425    6744 start.go:496] detecting cgroup driver to use...
	I1216 04:45:48.428518    6744 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:45:48.433191    6744 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 04:45:48.454716    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:45:48.477192    6744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:45:48.541112    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:45:48.561969    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:45:48.580436    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:45:48.603600    6744 ssh_runner.go:195] Run: which cri-dockerd
	I1216 04:45:48.615891    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 04:45:48.628527    6744 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 04:45:48.651242    6744 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 04:45:48.787383    6744 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 04:45:48.909620    6744 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 04:45:48.909790    6744 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 04:45:48.932232    6744 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 04:45:48.954390    6744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:45:49.087924    6744 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 04:45:49.885943    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:45:49.907255    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 04:45:49.929089    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:45:49.953671    6744 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 04:45:50.099835    6744 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 04:45:50.241609    6744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:45:50.388360    6744 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 04:45:50.412201    6744 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 04:45:50.432927    6744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:45:50.567647    6744 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 04:45:50.666142    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:45:50.684183    6744 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 04:45:50.688352    6744 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 04:45:50.696240    6744 start.go:564] Will wait 60s for crictl version
	I1216 04:45:50.702759    6744 ssh_runner.go:195] Run: which crictl
	I1216 04:45:50.713606    6744 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:45:50.755986    6744 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 04:45:50.759353    6744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:45:50.797420    6744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:45:50.835037    6744 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 04:45:50.838942    6744 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 04:45:50.971829    6744 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 04:45:50.977248    6744 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 04:45:50.984688    6744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:45:51.004753    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
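Note: the inspect template above walks NetworkSettings.Ports to pull out the host port Docker published for container port 8441/tcp; that mapping is how minikube reaches this cluster's API server from the Windows host.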
	I1216 04:45:51.059755    6744 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:45:51.059755    6744 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:45:51.064396    6744 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:45:51.092564    6744 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:45:51.092564    6744 docker.go:621] Images already preloaded, skipping extraction
	I1216 04:45:51.096291    6744 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:45:51.124281    6744 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:45:51.124351    6744 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:45:51.124351    6744 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 04:45:51.124412    6744 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
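Note: the [Unit]/[Service]/[Install] fragment above is the systemd drop-in minikube renders for the kubelet; it lands on the node shortly afterwards as the 323-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the merged unit on a live node, something along these lines should work (profile name taken from this run):

	minikube -p functional-002200 ssh -- sudo systemctl cat kubelet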
	I1216 04:45:51.127451    6744 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 04:45:51.200590    6744 cni.go:84] Creating CNI manager for ""
	I1216 04:45:51.200590    6744 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:45:51.200590    6744 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:45:51.200590    6744 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:45:51.200590    6744 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:45:51.205077    6744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:45:51.216291    6744 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:45:51.221804    6744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:45:51.235568    6744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 04:45:51.256214    6744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:45:51.275381    6744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
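Note: this 2225-byte file is the kubeadm config rendered above; it is copied into place as /var/tmp/minikube/kubeadm.yaml at 04:45:52 below, just before init runs. When init fails as it does later in this log, a cheap first check is validating the staged file offline; recent kubeadm releases ship a validate subcommand (assumed available in the pinned binaries directory):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml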
	I1216 04:45:51.297051    6744 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:45:51.304905    6744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:45:51.322224    6744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:45:51.456875    6744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:45:51.477797    6744 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 04:45:51.477797    6744 certs.go:195] generating shared ca certs ...
	I1216 04:45:51.477797    6744 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:51.478403    6744 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 04:45:51.478929    6744 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 04:45:51.479121    6744 certs.go:257] generating profile certs ...
	I1216 04:45:51.479515    6744 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 04:45:51.479578    6744 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.crt with IP's: []
	I1216 04:45:51.580610    6744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.crt ...
	I1216 04:45:51.580610    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.crt: {Name:mk207ec2f964bd67b3171c52db4ae4b5358ca085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:51.581623    6744 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key ...
	I1216 04:45:51.581623    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key: {Name:mk6283f2883b4b898190cc1b8509822efeda7a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:51.581623    6744 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 04:45:51.582612    6744 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt.31248742 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 04:45:51.671832    6744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt.31248742 ...
	I1216 04:45:51.671832    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt.31248742: {Name:mk321669699062a656ac903cb77ec2b81a53b77e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:51.672836    6744 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742 ...
	I1216 04:45:51.672836    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742: {Name:mkc7ed313bd0b1e086ab715f4a9bb91b585adff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:51.672836    6744 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt.31248742 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt
	I1216 04:45:51.688495    6744 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key
	I1216 04:45:51.689258    6744 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 04:45:51.689258    6744 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt with IP's: []
	I1216 04:45:51.787465    6744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt ...
	I1216 04:45:51.787465    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt: {Name:mk0ab48f79329278bf821d3fa23cf2795ca756f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:51.788466    6744 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key ...
	I1216 04:45:51.788466    6744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key: {Name:mkb87c8adf8833b1ea6dd61a9800686f0f01d7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:45:51.802486    6744 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 04:45:51.803582    6744 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 04:45:51.803582    6744 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 04:45:51.803582    6744 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 04:45:51.804105    6744 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 04:45:51.804322    6744 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 04:45:51.804659    6744 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 04:45:51.805797    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:45:51.833001    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:45:51.856496    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:45:51.885746    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:45:51.909253    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:45:51.938285    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:45:51.964831    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:45:51.987977    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:45:52.011396    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 04:45:52.036898    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:45:52.064546    6744 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 04:45:52.090885    6744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:45:52.111816    6744 ssh_runner.go:195] Run: openssl version
	I1216 04:45:52.126675    6744 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 04:45:52.145065    6744 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 04:45:52.161869    6744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 04:45:52.170073    6744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:45:52.173773    6744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 04:45:52.219436    6744 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:45:52.239185    6744 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 04:45:52.256244    6744 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 04:45:52.274300    6744 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 04:45:52.290822    6744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 04:45:52.297823    6744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:45:52.302246    6744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 04:45:52.348143    6744 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:45:52.363135    6744 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
	I1216 04:45:52.381004    6744 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:45:52.397975    6744 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:45:52.414054    6744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:45:52.421454    6744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:45:52.425398    6744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:45:52.475196    6744 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:45:52.492156    6744 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 04:45:52.510348    6744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:45:52.518258    6744 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
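Note: the failed statx is expected on a fresh node: minikube probes for apiserver-kubelet-client.crt only to decide whether this is a first start, and its absence routes execution into the plain "kubeadm init" path that follows.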
	I1216 04:45:52.519222    6744 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:45:52.522961    6744 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 04:45:52.552910    6744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:45:52.568632    6744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:45:52.581970    6744 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:45:52.585870    6744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:45:52.598593    6744 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:45:52.598593    6744 kubeadm.go:158] found existing configuration files:
	
	I1216 04:45:52.603734    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:45:52.615060    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:45:52.618793    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:45:52.636980    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:45:52.651742    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:45:52.655745    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:45:52.672716    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:45:52.687492    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:45:52.691552    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:45:52.708651    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:45:52.719541    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:45:52.723960    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:45:52.739303    6744 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:45:52.850187    6744 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 04:45:52.931490    6744 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:45:53.027526    6744 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:49:54.304108    6744 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 04:49:54.304108    6744 kubeadm.go:319] 
	I1216 04:49:54.304339    6744 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:49:54.307897    6744 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:49:54.307966    6744 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:49:54.307966    6744 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:49:54.307966    6744 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 04:49:54.307966    6744 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 04:49:54.308494    6744 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 04:49:54.308537    6744 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 04:49:54.308537    6744 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 04:49:54.308537    6744 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 04:49:54.308537    6744 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 04:49:54.308537    6744 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 04:49:54.308537    6744 kubeadm.go:319] CONFIG_INET: enabled
	I1216 04:49:54.309118    6744 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 04:49:54.309118    6744 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 04:49:54.309118    6744 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 04:49:54.309118    6744 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 04:49:54.309118    6744 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 04:49:54.309118    6744 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 04:49:54.309683    6744 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 04:49:54.309740    6744 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 04:49:54.309740    6744 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 04:49:54.309740    6744 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 04:49:54.309740    6744 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 04:49:54.309740    6744 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 04:49:54.309740    6744 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 04:49:54.310364    6744 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 04:49:54.310418    6744 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 04:49:54.310418    6744 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 04:49:54.310418    6744 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 04:49:54.310418    6744 kubeadm.go:319] OS: Linux
	I1216 04:49:54.310418    6744 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:49:54.310418    6744 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:49:54.310982    6744 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:49:54.311035    6744 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:49:54.311035    6744 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:49:54.311035    6744 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:49:54.311035    6744 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:49:54.311035    6744 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:49:54.311035    6744 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:49:54.311549    6744 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:49:54.311591    6744 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:49:54.311591    6744 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:49:54.311591    6744 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:49:54.313941    6744 out.go:252]   - Generating certificates and keys ...
	I1216 04:49:54.314542    6744 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:49:54.314542    6744 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:49:54.314542    6744 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:49:54.314542    6744 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:49:54.314542    6744 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:49:54.315063    6744 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:49:54.315091    6744 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:49:54.315091    6744 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-002200 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:49:54.315091    6744 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:49:54.315611    6744 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-002200 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 04:49:54.315749    6744 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:49:54.315749    6744 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:49:54.315749    6744 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:49:54.315749    6744 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:49:54.315749    6744 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:49:54.315749    6744 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:49:54.316297    6744 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:49:54.316297    6744 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:49:54.316297    6744 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:49:54.316297    6744 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:49:54.316297    6744 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:49:54.320152    6744 out.go:252]   - Booting up control plane ...
	I1216 04:49:54.320206    6744 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:49:54.320206    6744 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:49:54.320206    6744 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:49:54.320753    6744 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:49:54.320753    6744 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:49:54.320753    6744 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:49:54.321272    6744 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:49:54.321349    6744 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:49:54.321349    6744 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:49:54.321349    6744 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:49:54.321903    6744 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.003210092s
	I1216 04:49:54.321903    6744 kubeadm.go:319] 
	I1216 04:49:54.321903    6744 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:49:54.321903    6744 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:49:54.321903    6744 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:49:54.321903    6744 kubeadm.go:319] 
	I1216 04:49:54.322517    6744 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:49:54.322546    6744 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:49:54.322546    6744 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:49:54.322546    6744 kubeadm.go:319] 
	W1216 04:49:54.322546    6744 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-002200 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-002200 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.003210092s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
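Note: kubeadm gave up after polling the kubelet's local health endpoint for the full 4m0s budget. When reproducing this by hand, the same endpoint and the kubelet's journal can be probed from the host through the container (a sketch, using the container name from this run):

	docker exec -t functional-002200 curl -sS http://127.0.0.1:10248/healthz
	docker exec -t functional-002200 journalctl -u kubelet --no-pager -n 50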
	
	I1216 04:49:54.327466    6744 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 04:49:54.791606    6744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:49:54.809493    6744 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 04:49:54.814000    6744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:49:54.824970    6744 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:49:54.825015    6744 kubeadm.go:158] found existing configuration files:
	
	I1216 04:49:54.829522    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 04:49:54.842288    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:49:54.846379    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:49:54.863780    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 04:49:54.876415    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:49:54.880885    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:49:54.898082    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 04:49:54.910643    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:49:54.914898    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:49:54.931972    6744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 04:49:54.943969    6744 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:49:54.948737    6744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:49:54.964569    6744 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 04:49:55.080562    6744 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 04:49:55.163544    6744 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 04:49:55.260303    6744 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:53:55.746006    6744 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 04:53:55.746070    6744 kubeadm.go:319] 
	I1216 04:53:55.746381    6744 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 04:53:55.752941    6744 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 04:53:55.752941    6744 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:53:55.752941    6744 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 04:53:55.752941    6744 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 04:53:55.753598    6744 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 04:53:55.753645    6744 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 04:53:55.753645    6744 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 04:53:55.753645    6744 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 04:53:55.753645    6744 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 04:53:55.753645    6744 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 04:53:55.754215    6744 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 04:53:55.754215    6744 kubeadm.go:319] CONFIG_INET: enabled
	I1216 04:53:55.754215    6744 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 04:53:55.754215    6744 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 04:53:55.754215    6744 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 04:53:55.754888    6744 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 04:53:55.755014    6744 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 04:53:55.755111    6744 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 04:53:55.755201    6744 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 04:53:55.755296    6744 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 04:53:55.755296    6744 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 04:53:55.755296    6744 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 04:53:55.755296    6744 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 04:53:55.755296    6744 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 04:53:55.755296    6744 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 04:53:55.755296    6744 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 04:53:55.755843    6744 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 04:53:55.755843    6744 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 04:53:55.755843    6744 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 04:53:55.755843    6744 kubeadm.go:319] OS: Linux
	I1216 04:53:55.755843    6744 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 04:53:55.755843    6744 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 04:53:55.755843    6744 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 04:53:55.756385    6744 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 04:53:55.756463    6744 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 04:53:55.756585    6744 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 04:53:55.756585    6744 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 04:53:55.756585    6744 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 04:53:55.756585    6744 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 04:53:55.756585    6744 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:53:55.757147    6744 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:53:55.757147    6744 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:53:55.757147    6744 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:53:55.759874    6744 out.go:252]   - Generating certificates and keys ...
	I1216 04:53:55.759874    6744 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:53:55.759874    6744 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:53:55.760478    6744 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 04:53:55.760599    6744 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 04:53:55.760599    6744 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 04:53:55.760599    6744 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 04:53:55.760599    6744 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 04:53:55.760599    6744 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 04:53:55.761119    6744 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 04:53:55.761243    6744 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 04:53:55.761243    6744 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 04:53:55.761243    6744 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:53:55.761243    6744 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:53:55.761243    6744 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:53:55.761243    6744 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:53:55.761827    6744 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:53:55.761827    6744 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:53:55.761827    6744 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:53:55.761827    6744 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:53:55.764876    6744 out.go:252]   - Booting up control plane ...
	I1216 04:53:55.764876    6744 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:53:55.764876    6744 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:53:55.764876    6744 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:53:55.764876    6744 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:53:55.764876    6744 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:53:55.765875    6744 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:53:55.765875    6744 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:53:55.765875    6744 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:53:55.765875    6744 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:53:55.765875    6744 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:53:55.765875    6744 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001590635s
	I1216 04:53:55.765875    6744 kubeadm.go:319] 
	I1216 04:53:55.765875    6744 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 04:53:55.765875    6744 kubeadm.go:319] 	- The kubelet is not running
	I1216 04:53:55.766875    6744 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 04:53:55.766875    6744 kubeadm.go:319] 
	I1216 04:53:55.766875    6744 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 04:53:55.766875    6744 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 04:53:55.766875    6744 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 04:53:55.766875    6744 kubeadm.go:319] 
	I1216 04:53:55.766875    6744 kubeadm.go:403] duration metric: took 8m3.2451082s to StartCluster
	I1216 04:53:55.766875    6744 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 04:53:55.770884    6744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 04:53:55.826145    6744 cri.go:89] found id: ""
	I1216 04:53:55.826232    6744 logs.go:282] 0 containers: []
	W1216 04:53:55.826232    6744 logs.go:284] No container was found matching "kube-apiserver"
	I1216 04:53:55.826232    6744 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 04:53:55.830891    6744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 04:53:55.872532    6744 cri.go:89] found id: ""
	I1216 04:53:55.872532    6744 logs.go:282] 0 containers: []
	W1216 04:53:55.872532    6744 logs.go:284] No container was found matching "etcd"
	I1216 04:53:55.872532    6744 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 04:53:55.877185    6744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 04:53:55.916837    6744 cri.go:89] found id: ""
	I1216 04:53:55.916837    6744 logs.go:282] 0 containers: []
	W1216 04:53:55.916837    6744 logs.go:284] No container was found matching "coredns"
	I1216 04:53:55.916910    6744 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 04:53:55.921156    6744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 04:53:55.962381    6744 cri.go:89] found id: ""
	I1216 04:53:55.962381    6744 logs.go:282] 0 containers: []
	W1216 04:53:55.962381    6744 logs.go:284] No container was found matching "kube-scheduler"
	I1216 04:53:55.962381    6744 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 04:53:55.966529    6744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 04:53:56.006014    6744 cri.go:89] found id: ""
	I1216 04:53:56.006014    6744 logs.go:282] 0 containers: []
	W1216 04:53:56.006092    6744 logs.go:284] No container was found matching "kube-proxy"
	I1216 04:53:56.006092    6744 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 04:53:56.010368    6744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 04:53:56.048501    6744 cri.go:89] found id: ""
	I1216 04:53:56.048501    6744 logs.go:282] 0 containers: []
	W1216 04:53:56.048501    6744 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 04:53:56.048501    6744 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 04:53:56.052926    6744 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 04:53:56.096292    6744 cri.go:89] found id: ""
	I1216 04:53:56.096292    6744 logs.go:282] 0 containers: []
	W1216 04:53:56.096292    6744 logs.go:284] No container was found matching "kindnet"
	I1216 04:53:56.096292    6744 logs.go:123] Gathering logs for describe nodes ...
	I1216 04:53:56.096292    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 04:53:56.177536    6744 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:53:56.168703    9867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:56.169633    9867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:56.171848    9867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:56.172903    9867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:56.173838    9867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr identical to the five "connection refused" lines above; duplicate omitted]
	
	** /stderr **
	I1216 04:53:56.177536    6744 logs.go:123] Gathering logs for Docker ...
	I1216 04:53:56.177536    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 04:53:56.205590    6744 logs.go:123] Gathering logs for container status ...
	I1216 04:53:56.205590    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 04:53:56.249413    6744 logs.go:123] Gathering logs for kubelet ...
	I1216 04:53:56.249413    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 04:53:56.308644    6744 logs.go:123] Gathering logs for dmesg ...
	I1216 04:53:56.308644    6744 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 04:53:56.335279    6744 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001590635s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 04:53:56.335279    6744 out.go:285] * 
	W1216 04:53:56.336282    6744 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output logged above; duplicate omitted]
	
	W1216 04:53:56.336282    6744 out.go:285] * 
	W1216 04:53:56.338053    6744 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 04:53:56.343185    6744 out.go:203] 
	W1216 04:53:56.344886    6744 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output logged above; duplicate omitted]
	
	W1216 04:53:56.346115    6744 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 04:53:56.346191    6744 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 04:53:56.350838    6744 out.go:203] 
	
	
	==> Docker <==
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.765865490Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.765957297Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.765968198Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.765973698Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.765979698Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.766004100Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.766039403Z" level=info msg="Initializing buildkit"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.864600372Z" level=info msg="Completed buildkit initialization"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.875750883Z" level=info msg="Daemon has completed initialization"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.876123110Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.876124410Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 04:45:49 functional-002200 dockerd[1202]: time="2025-12-16T04:45:49.876195416Z" level=info msg="API listen on [::]:2376"
	Dec 16 04:45:49 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 04:45:50 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Loaded network plugin cni"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 04:45:50 functional-002200 cri-dockerd[1496]: time="2025-12-16T04:45:50Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 04:45:50 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 04:53:57.993954   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:57.995182   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:57.996479   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:57.997458   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 04:53:57.998124   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001060] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000996] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000951] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001074] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001129] FS:  0000000000000000 GS:  0000000000000000
	[  +6.671686] CPU: 14 PID: 44306 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000782] RIP: 0033:0x7f782e208b20
	[  +0.000389] Code: Unable to access opcode bytes at RIP 0x7f782e208af6.
	[  +0.000641] RSP: 002b:00007ffd9fc0f360 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000787] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000775] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000775] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000826] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000764] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000772] FS:  0000000000000000 GS:  0000000000000000
	[  +0.771356] CPU: 10 PID: 44420 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001083] RIP: 0033:0x7fe1d6387b20
	[  +0.000563] Code: Unable to access opcode bytes at RIP 0x7fe1d6387af6.
	[  +0.000858] RSP: 002b:00007fff60566d80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001061] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001041] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000838] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001072] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 04:53:58 up 30 min,  0 user,  load average: 0.32, 0.46, 0.71
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 04:53:55 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:55 functional-002200 kubelet[9754]: E1216 04:53:55.151695    9754 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:53:55 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:53:55 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:53:55 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 04:53:55 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:55 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:55 functional-002200 kubelet[9775]: E1216 04:53:55.887111    9775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:53:55 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:53:55 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:53:56 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 04:53:56 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:56 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:56 functional-002200 kubelet[9899]: E1216 04:53:56.645332    9899 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:53:56 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:53:56 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:53:57 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 04:53:57 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:57 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:57 functional-002200 kubelet[9927]: E1216 04:53:57.430731    9927 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 04:53:57 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 04:53:57 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 04:53:58 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 16 04:53:58 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 04:53:58 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
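
The kubelet journal above pinpoints the failure: kubelet v1.35.0-beta.0 exits at startup because the node is on cgroup v1, and per the SystemVerification warning the KubeletConfiguration option 'FailCgroupV1' (spelled failCgroupV1 in the YAML) must be set to 'false' for it to run there at all. A minimal sketch of that opt-in, assuming direct shell access to the node and reusing the config path kubeadm logged; minikube regenerates this file on restart, so this is illustrative rather than the project's fix:

	# Illustrative only: opt kubelet back into cgroup v1, then restart it.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet
	# kubeadm's wait-control-plane phase polls this endpoint:
	curl -sS http://127.0.0.1:10248/healthz
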
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 6 (566.6848ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 04:53:58.961913   13536 status.go:458] kubeconfig endpoint: get endpoint: "functional-002200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
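
The status check also surfaces a stale kubeconfig context; minikube's own hint in the stdout above names the remedy, reproduced here for completeness with the profile name taken from this test:

	out/minikube-windows-amd64.exe update-context -p functional-002200
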
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (519.76s)
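
Both the StartWithProxy failure above and the SoftStart failure below share one root cause: the Docker Desktop/WSL2 node is still on cgroup v1 (note dockerd's cgroup v1 deprecation warning and CgroupDriver:cgroupfs in the docker info dump). A host-side workaround commonly suggested for WSL2, offered as an untested sketch rather than anything this run verifies, is to boot the WSL2 kernel with cgroup v1 disabled so the node comes up on cgroup v2:

	# Hypothetical edit to %UserProfile%\.wslconfig on the Windows host:
	[wsl2]
	kernelCommandLine = cgroup_no_v1=all

	# Then restart WSL (and Docker Desktop) to apply it:
	wsl --shutdown
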

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (373.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 04:53:59.008452   11704 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-002200 --alsologtostderr -v=8
E1216 04:54:55.667796   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:55:23.376992   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:57:01.795482   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:59:55.670581   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-002200 --alsologtostderr -v=8: exit status 80 (6m8.9193287s)

                                                
                                                
-- stdout --
	* [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:53:59.077529   10816 out.go:360] Setting OutFile to fd 1388 ...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.120079   10816 out.go:374] Setting ErrFile to fd 1504...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.134125   10816 out.go:368] Setting JSON to false
	I1216 04:53:59.136333   10816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1860,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:53:59.136333   10816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:53:59.140588   10816 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:53:59.143257   10816 notify.go:221] Checking for updates...
	I1216 04:53:59.144338   10816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:53:59.146335   10816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:53:59.148852   10816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:53:59.153389   10816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:53:59.155692   10816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:53:59.158810   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:53:59.158810   10816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:53:59.271386   10816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:53:59.275857   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.515409   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.497557869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.520423   10816 out.go:179] * Using the docker driver based on existing profile
	I1216 04:53:59.523406   10816 start.go:309] selected driver: docker
	I1216 04:53:59.523406   10816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.523406   10816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:53:59.529406   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.757949   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.738153267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.838476   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:53:59.838476   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:53:59.838997   10816 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.842569   10816 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 04:53:59.844586   10816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:53:59.847541   10816 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:53:59.850024   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:53:59.850024   10816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:53:59.850184   10816 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 04:53:59.850253   10816 cache.go:65] Caching tarball of preloaded images
	I1216 04:53:59.850408   10816 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 04:53:59.850408   10816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
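Note: the preload.go/cache.go lines above short-circuit the download because the preloaded-images tarball is already on disk. A minimal Go sketch of that existence check (the cache layout is inferred from the paths in the log; the program itself is illustrative, not minikube's API):

	// check for the preload tarball and skip the download when present
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		cache := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "preloaded-tarball")
		name := "preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4"
		path := filepath.Join(cache, name)
		if fi, err := os.Stat(path); err == nil && fi.Size() > 0 {
			fmt.Println("found local preload, skipping download:", path)
			return
		}
		fmt.Println("preload missing, would download:", path)
	}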
	I1216 04:53:59.850408   10816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 04:53:59.925943   10816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:53:59.925943   10816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:53:59.926465   10816 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:53:59.926540   10816 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:53:59.926717   10816 start.go:364] duration metric: took 124.8µs to acquireMachinesLock for "functional-002200"
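The 500ms delay and 10m timeout in the lock descriptor above come straight from the log, but the lock primitive itself is not shown, so the sketch below guesses a simple create-with-O_EXCL file lock with the same retry shape (the primitive and the path are assumptions):

	// illustrative file-based lock: O_EXCL create as the primitive,
	// retried every `delay` until `timeout`, like acquireMachinesLock
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/mk-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("holding machines lock")
	}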
	I1216 04:53:59.926803   10816 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:53:59.926803   10816 fix.go:54] fixHost starting: 
	I1216 04:53:59.933877   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:53:59.985861   10816 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 04:53:59.986777   10816 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:53:59.990712   10816 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 04:53:59.990712   10816 machine.go:94] provisionDockerMachine start ...
	I1216 04:53:59.994611   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.050133   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.050702   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.050702   10816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:54:00.224414   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
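Each of the repeated `docker container inspect -f ...` runs above resolves the host port Docker published for the container's 22/tcp (49317 here) before dialing SSH. The same lookup as a standalone sketch (the Go template string is the one from the log; error handling is condensed):

	// resolve the host port mapped to the container's 22/tcp
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshPort("functional-002200")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh on 127.0.0.1:" + port)
	}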
	
	I1216 04:54:00.224414   10816 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 04:54:00.228183   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.284942   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.285440   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.285501   10816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 04:54:00.466400   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.469396   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.520394   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.520394   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.521395   10816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:54:00.690074   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
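The shell fragment above is deliberately idempotent: it leaves /etc/hosts alone when an entry already ends with the hostname, rewrites an existing 127.0.1.1 line if there is one, and only appends as a last resort (hence the empty command output here). The same decision ladder in Go (file handling details are illustrative):

	// ensure "127.0.1.1 <name>" is present in an /etc/hosts-style file
	package main

	import (
		"os"
		"strings"
	)

	func ensureHost(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		for _, l := range lines {
			if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
				return nil // some entry already maps the hostname
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // rewrite the existing entry
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+name) // otherwise append
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHost("/etc/hosts", "functional-002200"); err != nil {
			panic(err)
		}
	}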
	I1216 04:54:00.690074   10816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 04:54:00.690074   10816 ubuntu.go:190] setting up certificates
	I1216 04:54:00.690074   10816 provision.go:84] configureAuth start
	I1216 04:54:00.694148   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:00.751989   10816 provision.go:143] copyHostCerts
	I1216 04:54:00.752186   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1216 04:54:00.752528   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 04:54:00.752557   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 04:54:00.752557   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 04:54:00.753298   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1216 04:54:00.753298   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 04:54:00.753298   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 04:54:00.754021   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 04:54:00.754554   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1216 04:54:00.754554   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 04:54:00.754554   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 04:54:00.755135   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 04:54:00.755694   10816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
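provision.go:117 above mints a server certificate signed by the minikube CA, with SANs covering 127.0.0.1, 192.168.49.2, functional-002200, localhost and minikube. A self-contained sketch of that issuance with crypto/x509 (the file paths, the RSA/PKCS#1 key format, and reusing the 26280h CertExpiration from the cluster config are all assumptions):

	// issue a CA-signed server certificate with the SAN set from the log
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))      // illustrative path
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem"))) // illustrative path
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA CA key

		key := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-002200"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"functional-002200", "localhost", "minikube"},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
		must(0, pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}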
	I1216 04:54:00.834817   10816 provision.go:177] copyRemoteCerts
	I1216 04:54:00.838808   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:54:00.841808   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.896045   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:01.027660   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1216 04:54:01.027660   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:54:01.054957   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1216 04:54:01.054957   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:54:01.077598   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1216 04:54:01.077598   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:54:01.104237   10816 provision.go:87] duration metric: took 414.1604ms to configureAuth
	I1216 04:54:01.104237   10816 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:54:01.105157   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:01.110636   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.168864   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.169525   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.169551   10816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 04:54:01.355861   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 04:54:01.355861   10816 ubuntu.go:71] root file system type: overlay
	I1216 04:54:01.355861   10816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 04:54:01.359632   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.417983   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.418643   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.418643   10816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 04:54:01.607477   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
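The pair of ExecStart= lines is the load-bearing detail in the unit above: systemd allows only one ExecStart= for a Type=notify service, so an empty assignment first clears whatever the base unit set, exactly as the inline comments explain. minikube assembles this file as a rendered template piped through tee over SSH; a stripped-down sketch of that idea (the template text and field names here are illustrative, not minikube's real template):

	// render a minimal docker override unit from a template
	package main

	import (
		"os"
		"text/template"
	)

	// the essential part is the empty ExecStart= that resets the inherited value
	const unit = `[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// {{range .Flags}}{{.}} {{end}}
	`

	func main() {
		t := template.Must(template.New("docker").Parse(unit))
		err := t.Execute(os.Stdout, struct{ Flags []string }{
			Flags: []string{"--tlsverify", "--label", "provider=docker", "--insecure-registry", "10.96.0.0/12"},
		})
		if err != nil {
			panic(err)
		}
	}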
	
	I1216 04:54:01.611072   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.665669   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.666241   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.666241   10816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 04:54:01.838018   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
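The one-liner above is an idempotency guard: `diff -u old new || { mv ...; daemon-reload; enable; restart; }` only installs the new unit and bounces dockerd when the rendered file actually differs, which is why a running cluster update stays fast. The same compare-then-install flow in Go (paths from the log; the systemctl argument list is slightly condensed):

	// install the new unit and restart docker only when content changed
	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service") // missing file reads as empty
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(cur, next) {
			return // nothing to do; no daemon restart
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"-f", "enable", "docker"},
			{"-f", "restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
	}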
	I1216 04:54:01.838065   10816 machine.go:97] duration metric: took 1.8473421s to provisionDockerMachine
	I1216 04:54:01.838112   10816 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 04:54:01.838112   10816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:54:01.842730   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:54:01.845927   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.899710   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.030948   10816 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:54:02.037585   10816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_ID="12"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:54:02.037585   10816 command_runner.go:130] > ID=debian
	I1216 04:54:02.037585   10816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:54:02.037585   10816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:54:02.037585   10816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:54:02.037585   10816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:54:02.037585   10816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 04:54:02.038695   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 04:54:02.038739   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /etc/ssl/certs/117042.pem
	I1216 04:54:02.039358   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 04:54:02.039390   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> /etc/test/nested/copy/11704/hosts
	I1216 04:54:02.043645   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 04:54:02.054687   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 04:54:02.077250   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 04:54:02.106199   10816 start.go:296] duration metric: took 268.0858ms for postStartSetup
	I1216 04:54:02.110518   10816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:54:02.114167   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.171516   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.294935   10816 command_runner.go:130] > 1%
	I1216 04:54:02.299449   10816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:54:02.309560   10816 command_runner.go:130] > 950G
	I1216 04:54:02.309560   10816 fix.go:56] duration metric: took 2.3827424s for fixHost
	I1216 04:54:02.309560   10816 start.go:83] releasing machines lock for "functional-002200", held for 2.3828036s
	I1216 04:54:02.313570   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:02.366171   10816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 04:54:02.371688   10816 ssh_runner.go:195] Run: cat /version.json
	I1216 04:54:02.371747   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.373884   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.425495   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.428440   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.530908   10816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1216 04:54:02.530908   10816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
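Exit status 127 is bash's code for "command not found": the reachability probe asked the Debian guest to run the Windows binary name curl.exe, which does not exist there, and that failed probe appears to be what surfaces just below (04:54:02.771) as the "Failing to connect to https://registry.k8s.io/" warning, the same stderr that TestErrorSpam/setup flags as unexpected. An illustrative local stand-in for the probe with a fallback binary name (the real check runs over SSH inside the container):

	// try the Windows binary name first, then plain curl
	package main

	import (
		"fmt"
		"os/exec"
	)

	func reachable(url string) bool {
		for _, bin := range []string{"curl.exe", "curl"} {
			if err := exec.Command(bin, "-sS", "-m", "2", url).Run(); err == nil {
				return true
			}
		}
		return false
	}

	func main() {
		fmt.Println("registry reachable:", reachable("https://registry.k8s.io/"))
	}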
	I1216 04:54:02.552908   10816 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 04:54:02.557959   10816 ssh_runner.go:195] Run: systemctl --version
	I1216 04:54:02.566291   10816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:54:02.566291   10816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:54:02.571531   10816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:54:02.582535   10816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:54:02.582535   10816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:54:02.587977   10816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:54:02.599631   10816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:54:02.599684   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:02.599733   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:02.599952   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:02.620915   10816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 04:54:02.625275   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:54:02.642513   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:54:02.658404   10816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:54:02.664249   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:54:02.683612   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.703566   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:54:02.723114   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.741121   10816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:54:02.760533   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	W1216 04:54:02.771378   10816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 04:54:02.771378   10816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 04:54:02.781609   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:54:02.800465   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
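The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause sandbox image, relax restrict_oom_score_adj, force SystemdCgroup = false to match the cgroupfs driver detected on the host, and normalize the shim to io.containerd.runc.v2. One of those rewrites expressed in Go rather than sed (the regexp mirrors the log's pattern; the standalone program is illustrative):

	// force SystemdCgroup = false in containerd's config, like the sed above
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}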
	I1216 04:54:02.819380   10816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:54:02.832241   10816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:54:02.836457   10816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:54:02.854943   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:02.994394   10816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 04:54:03.139472   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:03.139472   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:03.143391   10816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > [Unit]
	I1216 04:54:03.162559   10816 command_runner.go:130] > Description=Docker Application Container Engine
	I1216 04:54:03.162647   10816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1216 04:54:03.162647   10816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1216 04:54:03.162647   10816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1216 04:54:03.162647   10816 command_runner.go:130] > Requires=docker.socket
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitBurst=3
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitIntervalSec=60
	I1216 04:54:03.162734   10816 command_runner.go:130] > [Service]
	I1216 04:54:03.162734   10816 command_runner.go:130] > Type=notify
	I1216 04:54:03.162734   10816 command_runner.go:130] > Restart=always
	I1216 04:54:03.162734   10816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1216 04:54:03.162807   10816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1216 04:54:03.162828   10816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1216 04:54:03.162828   10816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1216 04:54:03.162828   10816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1216 04:54:03.162900   10816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1216 04:54:03.162917   10816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1216 04:54:03.162917   10816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1216 04:54:03.162917   10816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1216 04:54:03.162917   10816 command_runner.go:130] > ExecStart=
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1216 04:54:03.163008   10816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNOFILE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNPROC=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitCORE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1216 04:54:03.163065   10816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1216 04:54:03.163065   10816 command_runner.go:130] > TasksMax=infinity
	I1216 04:54:03.163065   10816 command_runner.go:130] > TimeoutStartSec=0
	I1216 04:54:03.163065   10816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1216 04:54:03.163112   10816 command_runner.go:130] > Delegate=yes
	I1216 04:54:03.163112   10816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1216 04:54:03.163112   10816 command_runner.go:130] > KillMode=process
	I1216 04:54:03.163112   10816 command_runner.go:130] > OOMScoreAdjust=-500
	I1216 04:54:03.163112   10816 command_runner.go:130] > [Install]
	I1216 04:54:03.163112   10816 command_runner.go:130] > WantedBy=multi-user.target
	I1216 04:54:03.167400   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.188934   10816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:54:03.279029   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.300208   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:54:03.316692   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:03.338834   10816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1216 04:54:03.343609   10816 ssh_runner.go:195] Run: which cri-dockerd
	I1216 04:54:03.350066   10816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1216 04:54:03.355212   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 04:54:03.369229   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 04:54:03.392646   10816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 04:54:03.524584   10816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 04:54:03.661458   10816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 04:54:03.661598   10816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 04:54:03.685520   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 04:54:03.708589   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:03.845683   10816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 04:54:04.645791   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:54:04.667182   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 04:54:04.690401   10816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 04:54:04.718176   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:04.738992   10816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 04:54:04.903819   10816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 04:54:05.034592   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.166883   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 04:54:05.190738   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 04:54:05.211273   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.344748   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 04:54:05.446097   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:05.463790   10816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 04:54:05.471347   10816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:54:05.478565   10816 command_runner.go:130] > Device: 0,112	Inode: 1751        Links: 1
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Modify: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Change: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] >  Birth: -
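"Will wait 60s for socket path" above is a plain poll-until-deadline: stat the path until it shows up as a socket or the time budget runs out; here the stat succeeds on the first try. A compact sketch of that wait (the 500ms interval is an assumption; the path and timeout are the log's):

	// poll until the CRI socket exists or the deadline passes
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}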
	I1216 04:54:05.478565   10816 start.go:564] Will wait 60s for crictl version
	I1216 04:54:05.482816   10816 ssh_runner.go:195] Run: which crictl
	I1216 04:54:05.491459   10816 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:54:05.496033   10816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:54:05.533167   10816 command_runner.go:130] > Version:  0.1.0
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeName:  docker
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:54:05.533167   10816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 04:54:05.536709   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.572362   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.576856   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.612780   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.616153   10816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 04:54:05.619706   10816 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 04:54:05.740410   10816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 04:54:05.744411   10816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 04:54:05.751410   10816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1216 04:54:05.754417   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:05.810199   10816 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:54:05.810199   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:54:05.814984   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.850393   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.850393   10816 docker.go:621] Images already preloaded, skipping extraction
	I1216 04:54:05.852935   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.887286   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.887286   10816 cache_images.go:86] Images are preloaded, skipping loading
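The preload verdict above comes from comparing `docker images --format {{.Repository}}:{{.Tag}}` against the expected image list; only when every image is present can extraction be skipped ("Images are preloaded, skipping loading"). A sketch of that check (the want list below is a subset of the eight images in the log, kept short for illustration):

	// verify expected images are present in `docker images` output
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		want := []string{
			"registry.k8s.io/pause:3.10.1",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		for _, img := range want {
			fmt.Printf("%s preloaded: %v\n", img, have[img])
		}
	}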
	I1216 04:54:05.887286   10816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 04:54:05.887286   10816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:54:05.890789   10816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 04:54:05.960191   10816 command_runner.go:130] > cgroupfs
	I1216 04:54:05.960191   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:54:05.960191   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:54:05.960191   10816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:54:05.960723   10816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:54:05.960947   10816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:54:05.964962   10816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubeadm
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubectl
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubelet
	I1216 04:54:05.978770   10816 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:54:05.983615   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:54:05.994290   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 04:54:06.017936   10816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:54:06.036718   10816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1216 04:54:06.060901   10816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:54:06.072426   10816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:54:06.077308   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:06.213746   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:06.308797   10816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 04:54:06.308797   10816 certs.go:195] generating shared ca certs ...
	I1216 04:54:06.308797   10816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 04:54:06.310511   10816 certs.go:257] generating profile certs ...
	I1216 04:54:06.311535   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 04:54:06.311853   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 04:54:06.312156   10816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 04:54:06.312187   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:54:06.312277   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:54:06.312360   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:54:06.312444   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:54:06.312580   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:54:06.312673   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:54:06.312777   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:54:06.312890   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 04:54:06.313261   10816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 04:54:06.313921   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 04:54:06.314135   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 04:54:06.314531   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 04:54:06.314719   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem -> /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.315394   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:54:06.342547   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:54:06.368689   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:54:06.393638   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:54:06.418640   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:54:06.453759   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:54:06.476256   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:54:06.500532   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:54:06.524928   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 04:54:06.552508   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:54:06.575232   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 04:54:06.598894   10816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:54:06.620996   10816 ssh_runner.go:195] Run: openssl version
	I1216 04:54:06.631676   10816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:54:06.636278   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.653246   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:54:06.670292   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677576   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677653   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.681684   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.724946   10816 command_runner.go:130] > b5213941
	I1216 04:54:06.729462   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:54:06.747149   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.764470   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 04:54:06.780610   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.791611   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.834505   10816 command_runner.go:130] > 51391683
	I1216 04:54:06.839668   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:54:06.856437   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.871735   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 04:54:06.888873   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895775   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895828   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.900176   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.943961   10816 command_runner.go:130] > 3ec20f2e
	I1216 04:54:06.948620   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
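The openssl sequence above is how minikube makes each CA trusted inside the node: it computes the certificate's OpenSSL subject hash (e.g. b5213941) and symlinks /etc/ssl/certs/<hash>.0 at the PEM, which is the lookup scheme OpenSSL uses during verification. A minimal Go sketch of that step, assuming openssl is on PATH and the caller can write /etc/ssl/certs (installCACert is an illustrative name, not minikube's helper):

    // Sketch: compute the OpenSSL subject hash of a PEM certificate and
    // symlink /etc/ssl/certs/<hash>.0 at it, mirroring the logged
    // `openssl x509 -hash` + `ln -fs` pair.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// `ln -fs` semantics: drop any stale link before creating the new one.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }

The `sudo test -L /etc/ssl/certs/<hash>.0` lines that follow each hash are the corresponding existence check before the link is (re)created.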
	I1216 04:54:06.964812   10816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:54:06.978768   10816 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: 2025-12-16 04:49:55.262290705 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Modify: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Change: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978868   10816 command_runner.go:130] >  Birth: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.982552   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:54:07.026352   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.030610   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:54:07.075026   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.079065   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:54:07.126638   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.131687   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:54:07.174667   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.179083   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:54:07.222822   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.227385   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:54:07.271975   10816 command_runner.go:130] > Certificate will not expire
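Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same check can be expressed natively with crypto/x509; this is a sketch under that assumption (expiresWithin is an illustrative name, not minikube's code):

    // Sketch: report whether a PEM certificate's NotAfter falls within the
    // next d, equivalent to `openssl x509 -checkend <seconds>`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }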
	I1216 04:54:07.271975   10816 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:54:07.276330   10816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 04:54:07.308756   10816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:54:07.320226   10816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:54:07.320341   10816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:54:07.320341   10816 kubeadm.go:598] restartPrimaryControlPlane start ...
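The `sudo ls` just above is the restart-vs-init decision point: if the kubelet/kubeadm config files and the etcd data directory all already exist on the node, the cluster is restarted rather than re-initialized. A minimal sketch of that check (hasExistingConfig is an illustrative name, not minikube's kubeadm.go):

    // Sketch: treat the cluster as restartable only if every expected
    // configuration path is already present on the node.
    package main

    import (
    	"fmt"
    	"os"
    )

    func hasExistingConfig(paths ...string) bool {
    	for _, p := range paths {
    		if _, err := os.Stat(p); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	if hasExistingConfig(
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/minikube/etcd",
    	) {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	}
    }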
	I1216 04:54:07.325132   10816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:54:07.336047   10816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:54:07.339740   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.398431   10816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-002200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.399021   10816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-002200" cluster setting kubeconfig missing "functional-002200" context setting]
	I1216 04:54:07.399534   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
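The kubeconfig repair logged above fills in the missing "functional-002200" cluster and context entries under a file lock. A sketch of the same repair using client-go's clientcmd package, assuming k8s.io/client-go is available (repairKubeconfig is an illustrative name, and locking/credential wiring is omitted):

    // Sketch: load a kubeconfig, add any missing cluster/context entries for
    // a profile, and write the file back.
    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    func repairKubeconfig(path, profile, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[profile]; !ok {
    		cluster := api.NewCluster()
    		cluster.Server = server // e.g. the host-mapped apiserver port
    		cfg.Clusters[profile] = cluster
    	}
    	if _, ok := cfg.Contexts[profile]; !ok {
    		ctx := api.NewContext()
    		ctx.Cluster = profile
    		ctx.AuthInfo = profile
    		cfg.Contexts[profile] = ctx
    	}
    	cfg.CurrentContext = profile
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	if err := repairKubeconfig(
    		`C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`,
    		"functional-002200",
    		"https://127.0.0.1:49316",
    	); err != nil {
    		panic(err)
    	}
    }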
	I1216 04:54:07.418099   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.418579   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.419732   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:54:07.424264   10816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:54:07.438954   10816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 04:54:07.439621   10816 kubeadm.go:602] duration metric: took 119.279ms to restartPrimaryControlPlane
	I1216 04:54:07.439621   10816 kubeadm.go:403] duration metric: took 167.6444ms to StartCluster
	I1216 04:54:07.439621   10816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.439755   10816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.440821   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.441789   10816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 04:54:07.441839   10816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:54:07.442048   10816 addons.go:70] Setting storage-provisioner=true in profile "functional-002200"
	I1216 04:54:07.442048   10816 addons.go:70] Setting default-storageclass=true in profile "functional-002200"
	I1216 04:54:07.442130   10816 addons.go:239] Setting addon storage-provisioner=true in "functional-002200"
	I1216 04:54:07.442130   10816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-002200"
	I1216 04:54:07.442187   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.442187   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:07.445437   10816 out.go:179] * Verifying Kubernetes components...
	I1216 04:54:07.450118   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.450857   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.452175   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:07.507771   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.508167   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.508951   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.508951   10816 addons.go:239] Setting addon default-storageclass=true in "functional-002200"
	I1216 04:54:07.508951   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.517556   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.537496   10816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:07.540287   10816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.540287   10816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:54:07.546774   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.582442   10816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.582442   10816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:54:07.586285   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.606994   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:07.636962   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:07.645869   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:07.765470   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.777346   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.811577   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.866167   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 node_ready.go:35] waiting up to 6m0s for node "functional-002200" to be "Ready" ...
	W1216 04:54:07.869156   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 retry.go:31] will retry after 143.37804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 type.go:168] "Request Body" body=""
	I1216 04:54:07.870154   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	W1216 04:54:07.870154   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 retry.go:31] will retry after 150.951622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.872075   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
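The "will retry after 143.37804ms" / "150.951622ms" lines above, and the growing delays that follow (537ms, 617ms, 1.17s, ... 7.3s), show the jittered exponential backoff the addon applier uses while the apiserver is still refusing connections. A sketch of that pattern under stated assumptions (retryWithBackoff is an illustrative name, not minikube's retry.go, and the exact jitter factor is a guess from the logged deltas):

    // Sketch: retry f with a randomly jittered, roughly doubling delay so
    // concurrent retry loops don't synchronize their attempts.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
    	delay := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = f(); err == nil {
    			return nil
    		}
    		// Jitter by +/-50% around the current delay.
    		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %v: %v\n", jittered, err)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(5, 150*time.Millisecond, func() error {
    		return errors.New("connect: connection refused")
    	})
    	fmt.Println("gave up:", err)
    }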
	I1216 04:54:08.018062   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.025836   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.095508   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.099951   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 retry.go:31] will retry after 537.200798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.103237   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.103772   10816 retry.go:31] will retry after 434.961679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.544092   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.626905   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.632935   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.632935   10816 retry.go:31] will retry after 617.835459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.641591   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.717034   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.721285   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.721336   10816 retry.go:31] will retry after 555.435942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.872382   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:08.872382   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:08.874726   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
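In parallel, the node-readiness poller above keeps issuing GET /api/v1/nodes/functional-002200 and, on each "Got a Retry-After response", waits the server-suggested delay (1s here) before the next attempt. A bare net/http sketch of that loop, with TLS and protobuf negotiation omitted (client-go's round_trippers layer handles those; getWithRetryAfter is an illustrative name):

    // Sketch: re-issue a GET whenever the response carries a Retry-After
    // header, sleeping for the advertised number of seconds between attempts.
    package main

    import (
    	"fmt"
    	"net/http"
    	"strconv"
    	"time"
    )

    func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
    	for attempt := 1; ; attempt++ {
    		resp, err := http.Get(url)
    		if err != nil {
    			return nil, err
    		}
    		ra := resp.Header.Get("Retry-After")
    		if ra == "" || attempt >= maxAttempts {
    			return resp, nil
    		}
    		resp.Body.Close()
    		secs, convErr := strconv.Atoi(ra)
    		if convErr != nil {
    			secs = 1 // fall back to the 1s delay seen in this log
    		}
    		fmt.Printf("Got a Retry-After response, attempt=%d\n", attempt)
    		time.Sleep(time.Duration(secs) * time.Second)
    	}
    }

    func main() {
    	resp, err := getWithRetryAfter("https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
    	if err != nil {
    		fmt.Println("request failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }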
	I1216 04:54:09.256223   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:09.281163   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:09.337874   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.342648   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.342648   10816 retry.go:31] will retry after 1.171657048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.351506   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.353684   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.353684   10816 retry.go:31] will retry after 716.560141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.875116   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:09.875116   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:09.878246   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:10.075942   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:10.149131   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.153724   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.153724   10816 retry.go:31] will retry after 1.192910832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.520957   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:10.596120   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.600356   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.600356   10816 retry.go:31] will retry after 814.376196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.878697   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:10.879061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:10.882391   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:11.351917   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:11.419047   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:11.435699   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.435794   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.435828   10816 retry.go:31] will retry after 2.202073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.493635   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.497994   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.498062   10816 retry.go:31] will retry after 2.124694715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.883396   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:11.883898   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:11.886348   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:12.886583   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:12.886583   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:12.889839   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:13.629430   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:13.643127   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 3.773255134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 2.024299182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.890150   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:13.890150   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:13.893004   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:14.893300   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:14.893707   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:14.896357   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:15.748924   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:15.832154   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:15.836153   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.836153   10816 retry.go:31] will retry after 4.710098408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.897470   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:15.897470   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:15.900560   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:16.900812   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:16.900812   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:16.904208   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:17.498553   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:17.582081   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:17.582134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.582134   10816 retry.go:31] will retry after 4.959220117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.904607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:17.904607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.907482   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:17.907482   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:17.907482   10816 type.go:168] "Request Body" body=""
	I1216 04:54:17.907482   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.910186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:18.910930   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:18.910930   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:18.913636   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:19.913975   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:19.913975   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:19.917442   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:20.551463   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:20.635939   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:20.635939   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.635939   10816 retry.go:31] will retry after 7.302087091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
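[Editor's note] The validation error repeated throughout this log is a client-side failure: before applying, kubectl downloads the OpenAPI schema from the apiserver, and with nothing listening on localhost:8441 that download is refused. The suggested `--validate=false` would only skip the schema fetch; the apply itself still needs a reachable apiserver. A hedged sketch that reproduces the failing precondition by requesting the same OpenAPI document (URL copied from the log; InsecureSkipVerify is used only because reachability, not certificates, is in question here):

```go
// Sketch: fetch the OpenAPI document kubectl's validator needs. With the
// apiserver down this prints the same "connect: connection refused".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // same budget as kubectl's ?timeout=32s
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		fmt.Println("openapi fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}
```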
	I1216 04:54:20.917543   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:20.917543   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:20.922152   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:21.922714   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:21.923090   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:21.925451   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:22.546716   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:22.623025   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:22.626750   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.626750   10816 retry.go:31] will retry after 6.831180284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.925790   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:22.925790   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:22.929352   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:23.930014   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:23.930092   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:23.932838   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:24.933846   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:24.934195   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:24.936622   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:25.937442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:25.937516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:25.940094   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:26.940283   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:26.940283   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:26.943747   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:27.943504   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:27.945094   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:27.945165   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.947573   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:27.947626   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:27.947734   10816 type.go:168] "Request Body" body=""
	I1216 04:54:27.947766   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.950140   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
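[Editor's note] The `with_retry.go ... attempt=1..10` cycles above show the client honoring Retry-After responses from the apiserver: it sleeps the advertised 1s between attempts, gives up after ten, logs the `node_ready` EOF warning, and begins a fresh cycle. A minimal sketch of that loop — not client-go's actual with_retry implementation; TLS configuration is omitted, so against a self-signed apiserver the GET returns a certificate error, which the caller handles:

```go
// Sketch: GET a URL, sleeping on any Retry-After response, up to
// maxAttempts times (the log cycles through attempt=1..10).
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getHonoringRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // a real answer; caller closes the body
		}
		resp.Body.Close()
		secs, convErr := strconv.Atoi(ra)
		if convErr != nil || secs < 1 {
			secs = 1 // the log shows a fixed delay="1s"
		}
		fmt.Printf("got Retry-After, attempt=%d, sleeping %ds\n", attempt, secs)
		time.Sleep(time.Duration(secs) * time.Second)
	}
	return nil, fmt.Errorf("no real response after %d attempts", maxAttempts)
}

func main() {
	resp, err := getHonoringRetryAfter("https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```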
	I1216 04:54:28.023100   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:28.027085   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.027085   10816 retry.go:31] will retry after 8.693676062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.950523   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:28.950523   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:28.955399   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:29.463172   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:29.548936   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:29.548936   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.551954   10816 retry.go:31] will retry after 8.541447036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.956404   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:29.956404   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:29.959065   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:30.959708   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:30.959708   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:30.963012   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:31.964093   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:31.964093   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:31.967555   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:32.968057   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:32.968057   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:32.970609   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:33.971778   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:33.971778   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:33.975447   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:34.975764   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:34.975764   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:34.980867   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:54:35.981702   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:35.981702   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:35.985092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:36.726019   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:36.801339   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:36.806868   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.806868   10816 retry.go:31] will retry after 11.085665292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.986076   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:36.986076   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:36.989365   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:37.990461   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:37.990461   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.994420   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:54:37.994494   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:37.994613   10816 type.go:168] "Request Body" body=""
	I1216 04:54:37.994697   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.996806   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:38.098931   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:38.175856   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:38.181908   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.181908   10816 retry.go:31] will retry after 20.635277746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.997597   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:38.997597   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:39.000931   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:40.001375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:40.001375   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:40.004974   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:41.005192   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:41.005192   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:41.007919   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:42.009105   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:42.009105   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:42.012612   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:43.013312   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:43.013312   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:43.016575   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:44.017297   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:44.017297   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:44.020296   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:45.020698   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:45.020698   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:45.023875   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:46.024607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:46.024607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:46.027947   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.028088   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:47.028746   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:47.032023   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.898206   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:47.976246   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:47.980090   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:47.980090   10816 retry.go:31] will retry after 12.179357603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:48.033037   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:48.033037   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.035808   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:48.035808   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:48.035808   10816 type.go:168] "Request Body" body=""
	I1216 04:54:48.035808   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.040977   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:54:49.041226   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:49.041572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:49.043632   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:50.044672   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:50.044672   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:50.048807   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:51.049032   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:51.049032   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:51.051895   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:52.052810   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:52.052810   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:52.056184   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:53.056422   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:53.056422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:53.059030   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:54.059750   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:54.060113   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:54.063020   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:55.063099   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:55.063099   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:55.066474   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:56.066822   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:56.066822   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:56.071205   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:57.071421   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:57.071421   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:57.073734   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:58.073939   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:58.073939   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.076906   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:58.076906   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:58.076906   10816 type.go:168] "Request Body" body=""
	I1216 04:54:58.076906   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.081072   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:58.823241   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:58.903750   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:58.908134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:58.908134   10816 retry.go:31] will retry after 21.057070222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:59.081704   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:59.082161   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:59.085119   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.085233   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:00.085233   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:00.088190   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.165511   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:00.236692   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:00.240478   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:00.240478   10816 retry.go:31] will retry after 25.698880398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:01.089206   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:01.089206   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:01.093274   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:55:02.094123   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:02.094422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:02.097156   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:03.098295   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:03.098295   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:03.102257   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:04.103035   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:04.103035   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:04.106884   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:05.107465   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:05.107465   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:05.110542   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:06.112033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:06.112033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:06.114883   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:07.115061   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:07.115061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:07.118200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:08.119287   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:08.119622   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.122289   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:08.122330   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:08.122429   10816 type.go:168] "Request Body" body=""
	I1216 04:55:08.122520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.125754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:09.126342   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:09.126818   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:09.129086   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:10.129383   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:10.129722   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:10.133200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:11.134173   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:11.134173   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:11.136746   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:12.137338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:12.137338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:12.140387   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:13.140819   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:13.140819   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:13.144315   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:14.144624   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:14.144624   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:14.146619   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:55:15.148016   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:15.148016   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:15.150667   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:16.151188   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:16.151188   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:16.154512   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:17.154762   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:17.154762   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:17.157863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:18.158498   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:18.158835   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.161129   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:18.161129   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:18.161666   10816 type.go:168] "Request Body" body=""
	I1216 04:55:18.161765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.165763   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.166375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:19.166948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:19.170530   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.970281   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:55:20.048987   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:20.052948   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.052948   10816 retry.go:31] will retry after 40.980819462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.171417   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:20.171417   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:20.174285   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:21.174459   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:21.174459   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:21.178349   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:22.178639   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:22.178639   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:22.182103   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:23.182373   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:23.182373   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:23.186196   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:24.187572   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:24.187572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:24.190721   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:25.191259   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:25.191259   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:25.193863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:25.945563   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:26.023336   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
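The failure above is self-describing: kubectl's client-side validation tried to download the OpenAPI schema from the apiserver at localhost:8441 and got connection refused, so the storage-provisioner manifest never reached the cluster. As a triage aid, a minimal Go sketch (not minikube code; only the address is taken from the error text) that checks whether that port accepts TCP connections at all:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the "connection refused" error above; the rest
	// of this probe is hypothetical triage code, not part of minikube.
	addr := "localhost:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Same failure mode kubectl's validation hit while the
		// apiserver was down.
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("apiserver port open at %s\n", addr)
}

If this probe fails while the GETs in this log keep ending in EOF, the apiserver itself is down, and skipping validation with --validate=false would not make the apply succeed either.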
	[retry attempts 8-10 of the same GET, 04:55:26-04:55:28, again empty responses]
	W1216 04:55:28.205520   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
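The warning above comes from minikube's node Ready poll (node_ready.go): each cycle re-fetches the node object and inspects its Ready condition, and after ten fruitless Retry-After attempts it logs the EOF and starts over. A hedged client-go sketch of what such a check boils down to (wiring and names here are illustrative, not minikube's actual code; only the node name comes from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches one node and reports whether its Ready condition
// is True -- the status the poll in this log never managed to read.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. the EOF recorded in the warnings here
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "functional-002200")
	fmt.Println(ready, err)
}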
	[the node Ready poll restarted with a fresh GET, then Retry-After attempts 1-10, one per second from 04:55:28 to 04:55:38, all empty responses]
	W1216 04:55:38.248691   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[same pattern: fresh GET, then Retry-After attempts 1-10, 04:55:38-04:55:48, all empty responses]
	W1216 04:55:48.289962   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[same pattern: fresh GET, then Retry-After attempts 1-10, 04:55:48-04:55:58, all empty responses]
	W1216 04:55:58.330655   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[poll restarted: fresh GET, then Retry-After attempts 1-2, 04:55:58-04:56:00]
	I1216 04:56:01.039745   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:56:01.115386   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115386   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115924   10816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:56:01.120162   10816 out.go:179] * Enabled addons: 
	I1216 04:56:01.123251   10816 addons.go:530] duration metric: took 1m53.6807689s for enable addons: enabled=[]
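With addon enablement abandoned (enabled=[]), the log settles back into the loop that dominates this excerpt: with_retry sees a Retry-After response, waits the advertised second, and re-issues the GET, giving up after ten attempts so the poller can log its EOF warning and restart. A stdlib-only sketch of that retry shape, assuming nothing beyond what the log shows (the URL and the 1s/10-attempt pattern are from the log; the function itself is an illustration, not client-go's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// getWithRetry re-issues a GET after each Retry-After response, up to
// maxAttempts tries -- the shape of the with_retry.go lines above.
func getWithRetry(url string, maxAttempts int) (*http.Response, error) {
	client := &http.Client{
		// The logged endpoint is a local apiserver behind a self-signed
		// certificate, so verification is skipped for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err // e.g. the EOF in the node_ready warnings
		} else if ra := resp.Header.Get("Retry-After"); ra != "" {
			resp.Body.Close()
			lastErr = fmt.Errorf("Retry-After %ss on attempt %d", ra, attempt)
		} else {
			return resp, nil // a real answer at last
		}
		time.Sleep(time.Second) // the log shows a fixed delay="1s"
	}
	return nil, fmt.Errorf("gave up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	// URL taken verbatim from the log; 49316 is the forwarded apiserver port.
	_, err := getWithRetry("https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
	fmt.Println(err)
}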
	[Retry-After attempts 3-10 of the same GET, 04:56:01-04:56:08, all empty responses]
	W1216 04:56:08.375993   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[same pattern: fresh GET, then Retry-After attempts 1-10, 04:56:08-04:56:18, all empty responses]
	W1216 04:56:18.417494   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[same pattern: fresh GET, then Retry-After attempts 1-10, 04:56:18-04:56:28, all empty responses]
	W1216 04:56:28.461251   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[same pattern: fresh GET, then Retry-After attempts 1-10, 04:56:28-04:56:38, all empty responses]
	W1216 04:56:38.503378   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[same pattern: fresh GET, then Retry-After attempts 1-10, 04:56:38-04:56:48, all empty responses]
	W1216 04:56:48.546175   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[poll restarted once more at 04:56:48; Retry-After attempts 1-7 followed, one per second, still running at 04:56:55]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:56:55.575412   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:56:56.575643   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:56:56.575643   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:56:56.578246   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:56:57.579469   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:56:57.579837   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:56:57.582643   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:56:58.583174   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:56:58.583174   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:56:58.586391   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:56:58.586391   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:56:58.586391   10816 type.go:168] "Request Body" body=""
	I1216 04:56:58.586391   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:56:58.589558   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[... identical log lines elided: the same retry cycle repeats once per second from 04:56:59 through 04:58:18. Each GET to https://127.0.0.1:49316/api/v1/nodes/functional-002200 returns in 1-4 ms, with_retry.go:234 waits 1s between attempts 1-10, and node_ready.go:55 logs the EOF "will retry" warning after every tenth attempt (04:57:08, 04:57:18, 04:57:28, 04:57:38, 04:57:48, 04:57:58, 04:58:08) ...]
	W1216 04:58:18.914948   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:18.914948   10816 type.go:168] "Request Body" body=""
	I1216 04:58:18.914948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:18.917403   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:19.918088   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:19.918527   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:19.921232   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:20.921801   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:20.921801   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:20.925689   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:21.925981   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:21.925981   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:21.929421   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:22.929692   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:22.929692   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:22.934085   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:23.934312   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:23.934757   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:23.937761   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:24.938769   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:24.939209   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:24.942444   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:25.943100   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:25.943100   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:25.945226   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:26.945701   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:26.946109   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:26.947829   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:58:27.948365   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:27.948365   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:27.951830   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:28.952454   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:28.952454   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:28.956623   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1216 04:58:28.956759   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:28.956909   10816 type.go:168] "Request Body" body=""
	I1216 04:58:28.956990   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:28.959476   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:29.960256   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:29.960546   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:29.963746   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:30.964110   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:30.964110   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:30.967396   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:31.967947   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:31.967947   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:31.971619   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:32.972256   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:32.972256   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:32.975092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:33.975992   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:33.975992   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:33.979330   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:34.979792   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:34.980275   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:34.985587   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:58:35.985861   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:35.985861   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:35.988919   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:36.989563   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:36.989563   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:36.993055   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:37.993776   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:37.993776   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:37.997175   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:38.998214   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:38.998214   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:39.001897   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:58:39.001897   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:39.001897   10816 type.go:168] "Request Body" body=""
	I1216 04:58:39.001897   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:39.006108   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:40.006288   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:40.006288   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:40.009323   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:41.009760   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:41.009760   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:41.013530   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:42.013827   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:42.013827   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:42.017014   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:43.018254   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:43.018254   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:43.020804   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:44.021283   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:44.021578   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:44.025175   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:45.025733   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:45.026038   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:45.028762   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:46.029139   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:46.029139   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:46.032822   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:47.033121   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:47.033121   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:47.036186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:48.037338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:48.037338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:48.041634   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:49.041943   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:49.041943   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:49.044552   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:58:49.044552   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:49.045136   10816 type.go:168] "Request Body" body=""
	I1216 04:58:49.045179   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:49.047881   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:50.048858   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:50.049289   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:50.052681   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:51.053215   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:51.053675   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:51.055662   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:58:52.056918   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:52.056918   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:52.060467   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:53.061555   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:53.061992   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:53.063425   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:58:54.065095   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:54.065095   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:54.067617   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:55.068285   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:55.068285   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:55.071811   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:56.072296   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:56.072296   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:56.074442   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:57.075200   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:57.075200   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:57.078550   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:58.079588   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:58.079588   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:58.082364   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:59.083252   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:59.083252   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:59.085627   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:58:59.085627   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:59.085627   10816 type.go:168] "Request Body" body=""
	I1216 04:58:59.085627   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:59.088880   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:00.089932   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:00.090292   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:00.093204   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:01.093501   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:01.093501   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:01.096419   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:02.096985   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:02.096985   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:02.099764   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:03.100341   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:03.100341   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:03.103928   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:04.103977   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:04.103977   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:04.107337   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:05.108232   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:05.108232   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:05.110967   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:06.112125   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:06.112125   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:06.115328   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:07.115765   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:07.115765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:07.119250   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:08.119457   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:08.119457   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:08.122449   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:09.122631   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:09.122631   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:09.125978   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:09.126506   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:09.126611   10816 type.go:168] "Request Body" body=""
	I1216 04:59:09.126692   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:09.128714   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:10.129007   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:10.129007   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:10.132112   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:11.132462   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:11.132909   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:11.135945   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:12.136431   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:12.136431   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:12.139277   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:13.140319   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:13.140319   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:13.143791   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:14.144673   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:14.144969   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:14.147133   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:15.148066   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:15.148066   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:15.151666   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:16.152576   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:16.152576   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:16.155181   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:17.155710   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:17.155710   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:17.158668   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:18.159541   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:18.159541   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:18.163278   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:19.163911   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:19.163911   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:19.167509   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:19.167509   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:19.167509   10816 type.go:168] "Request Body" body=""
	I1216 04:59:19.167509   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:19.170448   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:20.170687   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:20.170687   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:20.173841   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:21.174586   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:21.174671   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:21.177173   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:22.177927   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:22.177927   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:22.181163   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:23.181445   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:23.181445   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:23.184486   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:24.184984   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:24.184984   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:24.188169   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:25.189332   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:25.189332   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:25.192735   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:26.193626   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:26.193973   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:26.198186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:27.198396   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:27.198396   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:27.201696   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:28.202442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:28.202442   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:28.205986   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:29.206746   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:29.207127   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:29.209566   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:59:29.209566   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:29.209566   10816 type.go:168] "Request Body" body=""
	I1216 04:59:29.210103   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:29.212125   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:30.212524   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:30.212524   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:30.215655   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:31.216215   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:31.216215   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:31.219690   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:32.220046   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:32.220046   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:32.223009   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:33.223314   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:33.223314   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:33.227018   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:34.227625   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:34.227625   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:34.230861   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:35.230966   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:35.230966   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:35.233871   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:36.234450   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:36.234450   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:36.238041   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:37.238279   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:37.238279   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:37.242076   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:38.242327   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:38.242667   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:38.244855   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:39.245186   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:39.245186   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:39.248453   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:39.248453   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:39.248453   10816 type.go:168] "Request Body" body=""
	I1216 04:59:39.248453   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:39.251221   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:40.252169   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:40.252169   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:40.255087   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:41.255519   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:41.255519   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:41.258620   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:42.258899   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:42.258899   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:42.262729   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:43.262828   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:43.263200   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:43.266061   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:44.266376   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:44.266376   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:44.269929   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:45.270664   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:45.270664   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:45.273706   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:46.274385   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:46.274490   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:46.277222   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:47.277605   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:47.277605   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:47.280855   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:48.281379   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:48.281379   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:48.284989   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:49.285064   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:49.285064   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:49.288248   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:49.288292   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
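(Each W line above comes from minikube's node_ready.go waiter: the GET for the node keeps failing with EOF, the error is logged as retryable, and polling continues once per second. A hypothetical client-go sketch of such a Ready-condition wait; the helper name, kubeconfig path, and timeout are illustrative and not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition, retrying on transient
    // errors (such as the EOFs in the log above) until the deadline expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		} // on error (e.g. EOF) fall through and retry, as node_ready.go does
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "functional-002200", 2*time.Minute))
    }

The log resumes below with the next polling cycle.)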
	I1216 04:59:49.288292   10816 type.go:168] "Request Body" body=""
	I1216 04:59:49.288292   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:49.290985   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:50.292197   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:50.292197   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:50.295316   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:51.295720   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:51.295720   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:51.299727   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:59:52.299933   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:52.300336   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:52.302657   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:53.303447   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:53.303447   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:53.306915   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:54.307348   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:54.307348   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:54.311155   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:55.311730   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:55.311730   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:55.315225   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:56.315472   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:56.315472   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:56.318408   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:57.319302   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:57.319302   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:57.322311   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:58.323301   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:58.323301   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:58.326036   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:59.326779   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:59.327147   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:59.330755   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:59.330828   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:59.330946   10816 type.go:168] "Request Body" body=""
	I1216 04:59:59.331049   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:59.334070   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:00.334751   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:00.335172   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:00.337839   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:01.338521   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:01.338521   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:01.341452   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:02.342326   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:02.342746   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:02.345360   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:03.346006   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:03.346006   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:03.349240   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 05:00:04.349594   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:04.349594   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:04.352907   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 05:00:05.354033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:05.354033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:05.357772   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 05:00:06.357911   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:06.358319   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:06.360594   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:07.361136   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:07.361136   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:07.364543   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 05:00:07.871664   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 05:00:07.871664   10816 node_ready.go:38] duration metric: took 6m0.0002013s for node "functional-002200" to be "Ready" ...
	I1216 05:00:07.876577   10816 out.go:203] 
	W1216 05:00:07.879616   10816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 05:00:07.879616   10816 out.go:285] * 
	W1216 05:00:07.881276   10816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:00:07.884672   10816 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-002200 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m9.7621321s for "functional-002200" cluster.
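Note on the failure mode: the log above is minikube's node-readiness wait. Every GET to https://127.0.0.1:49316/api/v1/nodes/functional-002200 is answered with EOF, the client sleeps 1s per Retry-After attempt, and once the 6m StartHostTimeout elapses the context deadline fires, which surfaces as the GUEST_START error. Below is a minimal, self-contained Go sketch of that loop shape (an illustration only, not minikube's actual node_ready.go; the URL comes from the log, and InsecureSkipVerify stands in for the client-cert auth the real client-go client performs):

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		// Same 6m budget as StartHostTimeout in the cluster config below.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		// URL taken from the log; a real client authenticates via kubeconfig.
		url := "https://127.0.0.1:49316/api/v1/nodes/functional-002200"
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}

		for {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				fmt.Println("building request:", err)
				return
			}
			resp, err := client.Do(req)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Naive check; the real code decodes the Node object and
				// inspects its "Ready" condition instead of string-matching.
				if strings.Contains(string(body), `"type":"Ready","status":"True"`) {
					fmt.Println("node is Ready")
					return
				}
			}
			select {
			case <-ctx.Done():
				// The state the log above ends in: the deadline expires while
				// the API server keeps dropping connections (EOF).
				fmt.Println("WaitNodeCondition:", ctx.Err())
				return
			case <-time.After(time.Second): // mirrors the 1s Retry-After delay
			}
		}
	}

Because every poll fails the same way, the deadline error is the first hard failure the caller reports, which is why the summary blames WaitNodeCondition rather than the underlying EOFs.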
I1216 05:00:08.773037   11704 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
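The Ports block above ties the failing requests back to this container: 127.0.0.1:49316 is the host-side mapping of the API server port 8441/tcp. The same mapping can be read back with a Go-template query of the kind minikube itself runs later in this log (POSIX-shell quoting shown; adjust for PowerShell):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-002200
	# prints 49316 for this container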
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (583.8268ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
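The split result is expected: --format={{.Host}} prints only the container state, while minikube documents the status exit code as a bitmask (1 = host/VM not OK, 2 = cluster not OK, 4 = Kubernetes not OK). With a Running host, exit status 2 points at the cluster layer; dropping the format flag shows the per-component breakdown:

	out/minikube-windows-amd64.exe status -p functional-002200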
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.2144416s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service        │ functional-902700 service hello-node --url --format={{.IP}}                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image save --daemon kicbase/echo-server:functional-902700 --alsologtostderr                           │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/11704.pem                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /usr/share/ca-certificates/11704.pem                                                     │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/51391683.0                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/117042.pem                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /usr/share/ca-certificates/117042.pem                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/test/nested/copy/11704/hosts                                                        │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ update-context │ functional-902700 update-context --alsologtostderr -v=2                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ update-context │ functional-902700 update-context --alsologtostderr -v=2                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format short --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh pgrep buildkitd                                                                                   │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr                  │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service        │ functional-902700 service hello-node --url                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format yaml --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format table --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format json --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ delete         │ -p functional-902700                                                                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │ 16 Dec 25 04:45 UTC │
	│ start          │ -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │                     │
	│ start          │ -p functional-002200 --alsologtostderr -v=8                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:53 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:53:59
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:53:59.077529   10816 out.go:360] Setting OutFile to fd 1388 ...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.120079   10816 out.go:374] Setting ErrFile to fd 1504...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.134125   10816 out.go:368] Setting JSON to false
	I1216 04:53:59.136333   10816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1860,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:53:59.136333   10816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:53:59.140588   10816 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:53:59.143257   10816 notify.go:221] Checking for updates...
	I1216 04:53:59.144338   10816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:53:59.146335   10816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:53:59.148852   10816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:53:59.153389   10816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:53:59.155692   10816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:53:59.158810   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:53:59.158810   10816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:53:59.271386   10816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:53:59.275857   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.515409   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.497557869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.520423   10816 out.go:179] * Using the docker driver based on existing profile
	I1216 04:53:59.523406   10816 start.go:309] selected driver: docker
	I1216 04:53:59.523406   10816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.523406   10816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:53:59.529406   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.757949   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.738153267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.838476   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:53:59.838476   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:53:59.838997   10816 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.842569   10816 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 04:53:59.844586   10816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:53:59.847541   10816 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:53:59.850024   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:53:59.850024   10816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:53:59.850184   10816 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 04:53:59.850253   10816 cache.go:65] Caching tarball of preloaded images
	I1216 04:53:59.850408   10816 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 04:53:59.850408   10816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 04:53:59.850408   10816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 04:53:59.925943   10816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:53:59.925943   10816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:53:59.926465   10816 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:53:59.926540   10816 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:53:59.926717   10816 start.go:364] duration metric: took 124.8µs to acquireMachinesLock for "functional-002200"
	I1216 04:53:59.926803   10816 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:53:59.926803   10816 fix.go:54] fixHost starting: 
	I1216 04:53:59.933877   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:53:59.985861   10816 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 04:53:59.986777   10816 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:53:59.990712   10816 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 04:53:59.990712   10816 machine.go:94] provisionDockerMachine start ...
	I1216 04:53:59.994611   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.050133   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.050702   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.050702   10816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:54:00.224414   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.224414   10816 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 04:54:00.228183   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.284942   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.285440   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.285501   10816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 04:54:00.466400   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.469396   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.520394   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.520394   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.521395   10816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:54:00.690074   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:54:00.690074   10816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 04:54:00.690074   10816 ubuntu.go:190] setting up certificates
	I1216 04:54:00.690074   10816 provision.go:84] configureAuth start
	I1216 04:54:00.694148   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:00.751989   10816 provision.go:143] copyHostCerts
	I1216 04:54:00.752186   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1216 04:54:00.752528   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 04:54:00.752557   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 04:54:00.752557   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 04:54:00.753298   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1216 04:54:00.753298   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 04:54:00.753298   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 04:54:00.754021   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 04:54:00.754554   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1216 04:54:00.754554   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 04:54:00.754554   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 04:54:00.755135   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 04:54:00.755694   10816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 04:54:00.834817   10816 provision.go:177] copyRemoteCerts
	I1216 04:54:00.838808   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:54:00.841808   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.896045   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
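The `docker container inspect -f` template above is how minikube discovers which host port Docker published for the container's SSH daemon (22/tcp); every `new ssh client` line in this log then dials 127.0.0.1 on that port (49317 for this profile). The same lookup, runnable on its own:

    # Print the host port mapped to 22/tcp inside the functional-002200 container
    docker container inspect \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        functional-002200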
	I1216 04:54:01.027660   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1216 04:54:01.027660   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:54:01.054957   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1216 04:54:01.054957   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:54:01.077598   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1216 04:54:01.077598   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:54:01.104237   10816 provision.go:87] duration metric: took 414.1604ms to configureAuth
	I1216 04:54:01.104237   10816 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:54:01.105157   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:01.110636   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.168864   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.169525   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.169551   10816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 04:54:01.355861   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 04:54:01.355861   10816 ubuntu.go:71] root file system type: overlay
	I1216 04:54:01.355861   10816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 04:54:01.359632   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.417983   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.418643   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.418643   10816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 04:54:01.607477   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
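The bare `ExecStart=` immediately before the real one is the systemd list-reset idiom the unit's own comments describe: an empty assignment discards any ExecStart inherited from the base configuration, so exactly one start command remains. A minimal sketch of the same idiom as a drop-in override; the path and dockerd flags here are illustrative only (minikube, as above, rewrites the whole unit instead):

    # Reset-then-set ExecStart in a drop-in, mirroring the idiom above.
    # Illustrative path and flags, not what minikube writes.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '%s\n' \
        '[Service]' \
        'ExecStart=' \
        'ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock' \
        | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker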
	I1216 04:54:01.611072   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.665669   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.666241   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.666241   10816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 04:54:01.838018   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
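The `diff -u ... || { ... }` one-liner above makes the unit update idempotent: `diff` exits non-zero only when the staged docker.service.new differs from the live unit (or the live unit is missing), and only then is the file moved into place and docker re-enabled and restarted. The empty output here means the staged unit matched the running one, so no restart was triggered. The skeleton of the idiom (abridged from the command above):

    # Swap in a staged config, reload, and restart only when it actually changed:
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload && sudo systemctl restart docker
    }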
	I1216 04:54:01.838065   10816 machine.go:97] duration metric: took 1.8473421s to provisionDockerMachine
	I1216 04:54:01.838112   10816 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 04:54:01.838112   10816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:54:01.842730   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:54:01.845927   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.899710   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.030948   10816 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:54:02.037585   10816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_ID="12"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:54:02.037585   10816 command_runner.go:130] > ID=debian
	I1216 04:54:02.037585   10816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:54:02.037585   10816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:54:02.037585   10816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:54:02.037585   10816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:54:02.037585   10816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 04:54:02.038695   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 04:54:02.038739   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /etc/ssl/certs/117042.pem
	I1216 04:54:02.039358   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 04:54:02.039390   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> /etc/test/nested/copy/11704/hosts
	I1216 04:54:02.043645   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 04:54:02.054687   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 04:54:02.077250   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 04:54:02.106199   10816 start.go:296] duration metric: took 268.0858ms for postStartSetup
	I1216 04:54:02.110518   10816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:54:02.114167   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.171516   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.294935   10816 command_runner.go:130] > 1%
	I1216 04:54:02.299449   10816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:54:02.309560   10816 command_runner.go:130] > 950G
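The two `df | awk` probes read one field each from df's data row: `NR==2` skips the header line, `$5` is the Use% column of `df -h`, and `$4` is the Avail column of `df -BG`. Standalone:

    # Percent of /var in use, and gigabytes still available:
    df -h  /var | awk 'NR==2{print $5}'   # 1% in this run
    df -BG /var | awk 'NR==2{print $4}'   # 950G in this run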
	I1216 04:54:02.309560   10816 fix.go:56] duration metric: took 2.3827424s for fixHost
	I1216 04:54:02.309560   10816 start.go:83] releasing machines lock for "functional-002200", held for 2.3828036s
	I1216 04:54:02.313570   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:02.366171   10816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 04:54:02.371688   10816 ssh_runner.go:195] Run: cat /version.json
	I1216 04:54:02.371747   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.373884   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.425495   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.428440   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.530908   10816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1216 04:54:02.530908   10816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 04:54:02.552908   10816 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 04:54:02.557959   10816 ssh_runner.go:195] Run: systemctl --version
	I1216 04:54:02.566291   10816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:54:02.566291   10816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:54:02.571531   10816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:54:02.582535   10816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:54:02.582535   10816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:54:02.587977   10816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:54:02.599631   10816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:54:02.599684   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:02.599733   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:02.599952   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:02.620915   10816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 04:54:02.625275   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:54:02.642513   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:54:02.658404   10816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:54:02.664249   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:54:02.683612   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.703566   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:54:02.723114   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.741121   10816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:54:02.760533   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	W1216 04:54:02.771378   10816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 04:54:02.771378   10816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 04:54:02.781609   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:54:02.800465   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
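Each `sed -i -r` above rewrites a single key of /etc/containerd/config.toml in place; capturing the leading whitespace as `\1` keeps the TOML indentation intact, so the substitution works at any nesting depth. The cgroup-driver edit, for example (the before/after lines are an assumed excerpt of the file):

    # before:    SystemdCgroup = true
    # after:     SystemdCgroup = false
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml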
	I1216 04:54:02.819380   10816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:54:02.832241   10816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:54:02.836457   10816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
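Note the `sudo sh -c "..."` wrapper on the ip_forward write: with a bare `sudo echo 1 > /proc/sys/net/ipv4/ip_forward` the redirection would be performed by the calling, unprivileged shell and fail with a permission error, so the whole command, redirect included, is handed to a root shell:

    # The redirect must run inside the root shell, not the caller's:
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"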
	I1216 04:54:02.854943   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:02.994394   10816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 04:54:03.139472   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:03.139472   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:03.143391   10816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > [Unit]
	I1216 04:54:03.162559   10816 command_runner.go:130] > Description=Docker Application Container Engine
	I1216 04:54:03.162647   10816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1216 04:54:03.162647   10816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1216 04:54:03.162647   10816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1216 04:54:03.162647   10816 command_runner.go:130] > Requires=docker.socket
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitBurst=3
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitIntervalSec=60
	I1216 04:54:03.162734   10816 command_runner.go:130] > [Service]
	I1216 04:54:03.162734   10816 command_runner.go:130] > Type=notify
	I1216 04:54:03.162734   10816 command_runner.go:130] > Restart=always
	I1216 04:54:03.162734   10816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1216 04:54:03.162807   10816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1216 04:54:03.162828   10816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1216 04:54:03.162828   10816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1216 04:54:03.162828   10816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1216 04:54:03.162900   10816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1216 04:54:03.162917   10816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1216 04:54:03.162917   10816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1216 04:54:03.162917   10816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1216 04:54:03.162917   10816 command_runner.go:130] > ExecStart=
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1216 04:54:03.163008   10816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNOFILE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNPROC=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitCORE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1216 04:54:03.163065   10816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1216 04:54:03.163065   10816 command_runner.go:130] > TasksMax=infinity
	I1216 04:54:03.163065   10816 command_runner.go:130] > TimeoutStartSec=0
	I1216 04:54:03.163065   10816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1216 04:54:03.163112   10816 command_runner.go:130] > Delegate=yes
	I1216 04:54:03.163112   10816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1216 04:54:03.163112   10816 command_runner.go:130] > KillMode=process
	I1216 04:54:03.163112   10816 command_runner.go:130] > OOMScoreAdjust=-500
	I1216 04:54:03.163112   10816 command_runner.go:130] > [Install]
	I1216 04:54:03.163112   10816 command_runner.go:130] > WantedBy=multi-user.target
	I1216 04:54:03.167400   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.188934   10816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:54:03.279029   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.300208   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:54:03.316692   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:03.338834   10816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
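This is the second rewrite of /etc/crictl.yaml during this start: at 04:54:02 it pointed crictl at containerd, and here, with Docker selected as the runtime, it is repointed at the cri-dockerd shim. The resulting file is a single line, as the echoed output confirms:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock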
	I1216 04:54:03.343609   10816 ssh_runner.go:195] Run: which cri-dockerd
	I1216 04:54:03.350066   10816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1216 04:54:03.355212   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 04:54:03.369229   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 04:54:03.392646   10816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 04:54:03.524584   10816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 04:54:03.661458   10816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 04:54:03.661598   10816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
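The 130-byte daemon.json payload itself is never echoed in this log. Purely as a hypothetical illustration of what a daemon.json selecting the cgroupfs driver can look like (`exec-opts` with `native.cgroupdriver` is Docker's documented knob for this; the actual bytes minikube writes are not shown here):

    # Hypothetical daemon.json pinning dockerd to the cgroupfs driver;
    # NOT the verbatim file from this run, whose contents are not logged.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF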
	I1216 04:54:03.685520   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 04:54:03.708589   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:03.845683   10816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 04:54:04.645791   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:54:04.667182   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 04:54:04.690401   10816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 04:54:04.718176   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:04.738992   10816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 04:54:04.903819   10816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 04:54:05.034592   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.166883   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 04:54:05.190738   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 04:54:05.211273   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.344748   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 04:54:05.446097   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:05.463790   10816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 04:54:05.471347   10816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:54:05.478565   10816 command_runner.go:130] > Device: 0,112	Inode: 1751        Links: 1
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Modify: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Change: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] >  Birth: -
	I1216 04:54:05.478565   10816 start.go:564] Will wait 60s for crictl version
	I1216 04:54:05.482816   10816 ssh_runner.go:195] Run: which crictl
	I1216 04:54:05.491459   10816 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:54:05.496033   10816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:54:05.533167   10816 command_runner.go:130] > Version:  0.1.0
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeName:  docker
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:54:05.533167   10816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 04:54:05.536709   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.572362   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.576856   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.612780   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.616153   10816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 04:54:05.619706   10816 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 04:54:05.740410   10816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 04:54:05.744411   10816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 04:54:05.751410   10816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
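The host-side IP used for mounts and for host.minikube.internal is obtained by resolving Docker Desktop's built-in name from inside the container, then checking that /etc/hosts already pins it:

    # Resolve the Windows host as seen from inside the container:
    docker exec -t functional-002200 dig +short host.docker.internal
    # 192.168.65.254 in this run; the grep above confirms the matching
    # "192.168.65.254<TAB>host.minikube.internal" entry in /etc/hosts.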
	I1216 04:54:05.754417   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:05.810199   10816 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:54:05.810199   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:54:05.814984   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.850393   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.850393   10816 docker.go:621] Images already preloaded, skipping extraction
	I1216 04:54:05.852935   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.887286   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.887286   10816 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:54:05.887286   10816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 04:54:05.887286   10816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:54:05.890789   10816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 04:54:05.960191   10816 command_runner.go:130] > cgroupfs
	I1216 04:54:05.960191   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:54:05.960191   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:54:05.960191   10816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:54:05.960723   10816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:54:05.960947   10816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:54:05.964962   10816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubeadm
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubectl
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubelet
	I1216 04:54:05.978770   10816 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:54:05.983615   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:54:05.994290   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 04:54:06.017936   10816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:54:06.036718   10816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1216 04:54:06.060901   10816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:54:06.072426   10816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:54:06.077308   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:06.213746   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:06.308797   10816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 04:54:06.308797   10816 certs.go:195] generating shared ca certs ...
	I1216 04:54:06.308797   10816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 04:54:06.310511   10816 certs.go:257] generating profile certs ...
	I1216 04:54:06.311535   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 04:54:06.311853   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 04:54:06.312156   10816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 04:54:06.312187   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:54:06.312277   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:54:06.312360   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:54:06.312444   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:54:06.312580   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:54:06.312673   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:54:06.312777   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:54:06.312890   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 04:54:06.313261   10816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 04:54:06.313921   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 04:54:06.314135   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 04:54:06.314531   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 04:54:06.314719   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem -> /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.315394   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:54:06.342547   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:54:06.368689   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:54:06.393638   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:54:06.418640   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:54:06.453759   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:54:06.476256   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:54:06.500532   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:54:06.524928   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 04:54:06.552508   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:54:06.575232   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 04:54:06.598894   10816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:54:06.620996   10816 ssh_runner.go:195] Run: openssl version
	I1216 04:54:06.631676   10816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:54:06.636278   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.653246   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:54:06.670292   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677576   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677653   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.681684   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.724946   10816 command_runner.go:130] > b5213941
	I1216 04:54:06.729462   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:54:06.747149   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.764470   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 04:54:06.780610   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.791611   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.834505   10816 command_runner.go:130] > 51391683
	I1216 04:54:06.839668   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:54:06.856437   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.871735   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 04:54:06.888873   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895775   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895828   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.900176   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.943961   10816 command_runner.go:130] > 3ec20f2e
	I1216 04:54:06.948620   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
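The same four-step dance repeats for each of the three CA bundles: check the PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs under its own name, compute its subject hash, and verify the hash-named link. OpenSSL looks CA certificates up via `<subject-hash>.0` symlinks, which is what the `openssl x509 -hash` / `test -L` pair establishes. The idiom, sketched for the minikubeCA bundle:

    # OpenSSL finds CAs via <subject-hash>.0 links under /etc/ssl/certs;
    # for minikubeCA.pem the hash came out as b5213941 in this run.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo test -L "/etc/ssl/certs/$h.0" || sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"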
	I1216 04:54:06.964812   10816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:54:06.978768   10816 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: 2025-12-16 04:49:55.262290705 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Modify: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Change: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978868   10816 command_runner.go:130] >  Birth: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.982552   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:54:07.026352   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.030610   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:54:07.075026   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.079065   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:54:07.126638   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.131687   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:54:07.174667   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.179083   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:54:07.222822   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.227385   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:54:07.271975   10816 command_runner.go:130] > Certificate will not expire
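`-checkend 86400` asks openssl whether the certificate will have expired 86400 seconds (24 hours) from now; a zero exit plus "Certificate will not expire" means each control-plane cert is still good for at least a day, so none need regeneration before the restart proceeds:

    # Non-zero exit (and "Certificate will expire") would force regeneration:
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt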
	I1216 04:54:07.271975   10816 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:54:07.276330   10816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 04:54:07.308756   10816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:54:07.320226   10816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:54:07.320341   10816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:54:07.320341   10816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:54:07.325132   10816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:54:07.336047   10816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:54:07.339740   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.398431   10816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-002200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.399021   10816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-002200" cluster setting kubeconfig missing "functional-002200" context setting]
	I1216 04:54:07.399534   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.418099   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.418579   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
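
Note: the rest.Config dump above maps directly onto client-go: a Host pointing at the forwarded apiserver port plus the profile's client certificate pair and CA. A minimal sketch that builds an equivalent client by hand and issues the GET /api/v1/nodes/functional-002200 request seen repeatedly below:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://127.0.0.1:49316",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: `C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.crt`,
			KeyFile:  `C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key`,
			CAFile:   `C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt`,
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-002200", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // the log below shows this failing with EOF while the apiserver restarts
	}
	log.Printf("node %s", node.Name)
}
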
	I1216 04:54:07.419732   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:54:07.424264   10816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:54:07.438954   10816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 04:54:07.439621   10816 kubeadm.go:602] duration metric: took 119.279ms to restartPrimaryControlPlane
	I1216 04:54:07.439621   10816 kubeadm.go:403] duration metric: took 167.6444ms to StartCluster
	I1216 04:54:07.439621   10816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.439755   10816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.440821   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.441789   10816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 04:54:07.441839   10816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:54:07.442048   10816 addons.go:70] Setting storage-provisioner=true in profile "functional-002200"
	I1216 04:54:07.442048   10816 addons.go:70] Setting default-storageclass=true in profile "functional-002200"
	I1216 04:54:07.442130   10816 addons.go:239] Setting addon storage-provisioner=true in "functional-002200"
	I1216 04:54:07.442130   10816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-002200"
	I1216 04:54:07.442187   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.442187   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:07.445437   10816 out.go:179] * Verifying Kubernetes components...
	I1216 04:54:07.450118   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.450857   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.452175   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:07.507771   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.508167   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.508951   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.508951   10816 addons.go:239] Setting addon default-storageclass=true in "functional-002200"
	I1216 04:54:07.508951   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.517556   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.537496   10816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:07.540287   10816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.540287   10816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:54:07.546774   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.582442   10816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.582442   10816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:54:07.586285   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.606994   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:07.636962   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
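
Note: the sshutil lines above open an SSH session to the node's forwarded port 49317 using the machine's id_rsa key, and the earlier "scp memory -->" steps stream manifest bytes over such a session. A minimal sketch of both steps with golang.org/x/crypto/ssh; the sudo tee command stands in for minikube's actual transfer helper:

package main

import (
	"bytes"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyToNode streams in-memory bytes to a root-owned path on the node.
func copyToNode(client *ssh.Client, data []byte, dest string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee writes stdin to the destination; sudo handles root-owned paths.
	return session.Run("sudo tee " + dest + " >/dev/null")
}

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49317", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not for production
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	manifest := []byte("# storage-provisioner manifest bytes would go here\n")
	if err := copyToNode(client, manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		log.Fatal(err)
	}
}
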
	I1216 04:54:07.645869   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:07.765470   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.777346   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.811577   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.866167   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 node_ready.go:35] waiting up to 6m0s for node "functional-002200" to be "Ready" ...
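
Note: node_ready.go above starts a six-minute wait for the node's Ready condition. A minimal sketch of such a loop with apimachinery's polling helper, assuming a clientset built as in the earlier sketch (minikube's real implementation differs in detail):

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors are retried, as in the log
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg := &rest.Config{Host: "https://127.0.0.1:49316"} // certificate fields omitted here for brevity
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(waitNodeReady(cs, "functional-002200"))
}
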
	W1216 04:54:07.869156   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 retry.go:31] will retry after 143.37804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
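
Note: the retry.go lines above follow a standard backoff pattern: on failure, sleep for a growing, jittered delay (143ms here, later 537ms, 1.19s, and so on) and run the command again. A minimal sketch of that pattern; the constants and jitter scheme are illustrative, not minikube's exact values:

package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn up to attempts times, doubling a jittered
// delay between failures, like the "will retry after ..." lines above.
func retryWithBackoff(fn func() error, attempts int) error {
	delay := 150 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the delay so concurrent retries do not synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("will retry after %v: %v", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("connection refused") // stand-in for the failing kubectl apply
	}, 3)
	log.Printf("gave up: %v", err)
}
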
	I1216 04:54:07.870154   10816 type.go:168] "Request Body" body=""
	I1216 04:54:07.870154   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	W1216 04:54:07.870154   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 retry.go:31] will retry after 150.951622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.872075   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
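
Note: the with_retry / round_trippers pairs that dominate the rest of this log show the Kubernetes client re-issuing the node GET once per second because the half-restarted apiserver answers with a Retry-After header. A minimal sketch of that behavior as an http.RoundTripper wrapper; this is illustrative, not client-go's actual with_retry implementation, and is only safe for bodyless requests such as this GET:

package main

import (
	"log"
	"net/http"
	"strconv"
	"time"
)

type retryAfterTransport struct {
	next http.RoundTripper
}

func (t retryAfterTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		log.Printf("Request verb=%s url=%s", req.Method, req.URL)
		resp, err := t.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= 10 {
			return resp, nil // no Retry-After, or retry budget exhausted
		}
		secs, convErr := strconv.Atoi(ra)
		if convErr != nil {
			return resp, nil
		}
		resp.Body.Close() // discard this response; the request has no body, so re-sending is safe
		log.Printf("Got a Retry-After response delay=%ds attempt=%d", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	client := &http.Client{Transport: retryAfterTransport{next: http.DefaultTransport}}
	_, _ = client.Get("https://127.0.0.1:49316/api/v1/nodes/functional-002200")
}
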
	I1216 04:54:08.018062   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.025836   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.095508   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.099951   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 retry.go:31] will retry after 537.200798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.103237   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.103772   10816 retry.go:31] will retry after 434.961679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.544092   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.626905   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.632935   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.632935   10816 retry.go:31] will retry after 617.835459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.641591   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.717034   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.721285   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.721336   10816 retry.go:31] will retry after 555.435942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.872382   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:08.872382   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:08.874726   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:09.256223   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:09.281163   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:09.337874   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.342648   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.342648   10816 retry.go:31] will retry after 1.171657048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.351506   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.353684   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.353684   10816 retry.go:31] will retry after 716.560141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.875116   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:09.875116   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:09.878246   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:10.075942   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:10.149131   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.153724   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.153724   10816 retry.go:31] will retry after 1.192910832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.520957   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:10.596120   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.600356   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.600356   10816 retry.go:31] will retry after 814.376196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.878697   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:10.879061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:10.882391   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:11.351917   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:11.419047   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:11.435699   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.435794   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.435828   10816 retry.go:31] will retry after 2.202073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.493635   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.497994   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.498062   10816 retry.go:31] will retry after 2.124694715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.883396   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:11.883898   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:11.886348   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:12.886583   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:12.886583   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:12.889839   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:13.629430   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:13.643127   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 3.773255134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 2.024299182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.890150   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:13.890150   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:13.893004   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:14.893300   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:14.893707   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:14.896357   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:15.748924   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:15.832154   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:15.836153   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.836153   10816 retry.go:31] will retry after 4.710098408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.897470   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:15.897470   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:15.900560   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:16.900812   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:16.900812   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:16.904208   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:17.498553   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:17.582081   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:17.582134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.582134   10816 retry.go:31] will retry after 4.959220117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.904607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:17.904607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.907482   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:17.907482   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:17.907482   10816 type.go:168] "Request Body" body=""
	I1216 04:54:17.907482   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.910186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:18.910930   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:18.910930   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:18.913636   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:19.913975   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:19.913975   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:19.917442   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:20.551463   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:20.635939   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:20.635939   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.635939   10816 retry.go:31] will retry after 7.302087091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.917543   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:20.917543   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:20.922152   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:21.922714   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:21.923090   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:21.925451   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:22.546716   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:22.623025   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:22.626750   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.626750   10816 retry.go:31] will retry after 6.831180284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.925790   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:22.925790   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:22.929352   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:23.930014   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:23.930092   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:23.932838   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:24.933846   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:24.934195   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:24.936622   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:25.937442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:25.937516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:25.940094   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:26.940283   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:26.940283   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:26.943747   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:27.943504   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:27.945094   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:27.945165   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.947573   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:27.947626   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:27.947734   10816 type.go:168] "Request Body" body=""
	I1216 04:54:27.947766   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.950140   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:28.023100   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:28.027085   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.027085   10816 retry.go:31] will retry after 8.693676062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.950523   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:28.950523   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:28.955399   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:29.463172   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:29.548936   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:29.548936   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.551954   10816 retry.go:31] will retry after 8.541447036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... attempts 2-8 of the same GET https://127.0.0.1:49316/api/v1/nodes/functional-002200 elided (04:54:29-04:54:35): one retry per second after each Retry-After response, every request answered within a few milliseconds with an empty status ...]
	I1216 04:54:36.726019   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:36.801339   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:36.806868   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.806868   10816 retry.go:31] will retry after 11.085665292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.986076   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:36.986076   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:36.989365   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:37.990461   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:37.990461   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.994420   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:54:37.994494   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
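Each node_ready round above follows the same pattern: client-go's retry wrapper (with_retry.go) sees a Retry-After response, re-issues the GET once per second for ten attempts, and the round then surfaces as an EOF that minikube logs as "will retry". A rough sketch of that polling shape in plain net/http terms; the timeout budget and skip-verify transport are illustrative assumptions (the real client authenticates with cluster certificates):

    // Sketch: poll the node object once per second and treat transport
    // errors (EOF, connection refused) as "not ready yet" until a deadline.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        url := "https://127.0.0.1:49316/api/v1/nodes/functional-002200"
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("node object reachable")
                return
            }
            if err != nil {
                fmt.Println("will retry:", err) // matches the EOF lines above
            } else {
                resp.Body.Close()
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for node")
    }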
	I1216 04:54:37.994613   10816 type.go:168] "Request Body" body=""
	I1216 04:54:37.994697   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.996806   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:38.098931   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:38.175856   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:38.181908   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.181908   10816 retry.go:31] will retry after 20.635277746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... attempts 1-9 of the identical GET elided (04:54:38-04:54:47), same one-per-second Retry-After pattern as above ...]
	I1216 04:54:47.898206   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:47.976246   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:47.980090   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:47.980090   10816 retry.go:31] will retry after 12.179357603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:48.033037   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:48.033037   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.035808   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:48.035808   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:48.035808   10816 type.go:168] "Request Body" body=""
	I1216 04:54:48.035808   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.040977   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	[... attempts 1-9 of the identical GET elided (04:54:49-04:54:57), same one-per-second Retry-After pattern ...]
	I1216 04:54:58.073939   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:58.073939   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.076906   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:58.076906   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:58.076906   10816 type.go:168] "Request Body" body=""
	I1216 04:54:58.076906   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.081072   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:58.823241   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:58.903750   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:58.908134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:58.908134   10816 retry.go:31] will retry after 21.057070222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:59.081704   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:59.082161   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:59.085119   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.085233   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:00.085233   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:00.088190   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.165511   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:00.236692   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:00.240478   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:00.240478   10816 retry.go:31] will retry after 25.698880398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... attempts 3-9 of the identical GET elided (04:55:01-04:55:07), same one-per-second Retry-After pattern ...]
	I1216 04:55:08.119287   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:08.119622   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.122289   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:08.122330   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:08.122429   10816 type.go:168] "Request Body" body=""
	I1216 04:55:08.122520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.125754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[... attempts 1-9 of the identical GET elided (04:55:09-04:55:17), same one-per-second Retry-After pattern ...]
	I1216 04:55:18.158498   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:18.158835   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.161129   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:18.161129   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:18.161666   10816 type.go:168] "Request Body" body=""
	I1216 04:55:18.161765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.165763   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.166375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:19.166948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:19.170530   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.970281   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:55:20.048987   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:20.052948   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.052948   10816 retry.go:31] will retry after 40.980819462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... attempts 2-7 of the identical GET elided (04:55:20-04:55:25), same one-per-second Retry-After pattern ...]
	I1216 04:55:25.945563   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:26.023336   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
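At this point the retry budget for storage-provisioner is exhausted and minikube reports the addon enable failure. Note that kubectl's suggestion to pass --validate=false would only skip schema validation; the apply would still fail because nothing is answering on localhost:8441. A quick way to confirm that root cause is a plain TCP probe of the apiserver port (address taken from the log; the probe itself is just an illustration):

    // Sketch: distinguish "connection refused" (apiserver down or
    // restarting) from a listening socket on the port kubectl is using.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }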
	[... attempts 8-10 of the identical GET elided (04:55:26-04:55:28), same one-per-second Retry-After pattern ...]
	W1216 04:55:28.205520   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:28.205520   10816 type.go:168] "Request Body" body=""
	I1216 04:55:28.205520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:28.207479   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	[... attempts 1-10 of the identical GET elided (04:55:29-04:55:38), same one-per-second Retry-After pattern ...]
	W1216 04:55:38.248691   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:38.248769   10816 type.go:168] "Request Body" body=""
	I1216 04:55:38.248876   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:38.250514   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	[... attempts 1-8 of the identical GET elided (04:55:39-04:55:46), same one-per-second Retry-After pattern ...]
	I1216 04:55:47.282933   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:47.283421   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:47.285798   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:48.286808   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:48.286808   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:48.289962   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:55:48.289962   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
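The block above is client-side retry plumbing: every GET to the apiserver port forwarded at 127.0.0.1:49316 comes back without a usable status (the underlying connection is closed, which surfaces as the EOF in the warning), the client waits the logged 1s delay, and after ten attempts node_ready.go records the failure and starts a fresh cycle. Below is a minimal sketch of that pattern using only the Go standard library; it keys the back-off off an actual Retry-After header, which is a simplification of client-go's retry logic, and fetchWithRetryAfter is an illustrative name, not minikube's API.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// fetchWithRetryAfter re-issues a GET while the server keeps asking the
// client to wait, up to maxAttempts tries. Names and the retry budget
// are illustrative, not minikube's with_retry.go internals.
func fetchWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // no back-off requested: hand the response back
		}
		resp.Body.Close()
		delay := time.Second // default matches the delay="1s" entries above
		if secs, convErr := strconv.Atoi(ra); convErr == nil {
			delay = time.Duration(secs) * time.Second
		}
		fmt.Printf("Got a Retry-After response delay=%s attempt=%d url=%q\n", delay, attempt, url)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("retry budget exhausted after %d attempts", maxAttempts)
}

func main() {
	resp, err := fetchWithRetryAfter(http.DefaultClient, "https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```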
	[log elided: the one-second retry cycle repeats as above, ten GETs to https://127.0.0.1:49316/api/v1/nodes/functional-002200 each answered within 1-4 ms with an empty status, ending in the same node_ready.go EOF warning at 04:55:58; attempts 1-2 of the next cycle follow through 04:56:00]
	I1216 04:56:01.039745   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:56:01.115386   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115386   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115924   10816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:56:01.120162   10816 out.go:179] * Enabled addons: 
	I1216 04:56:01.123251   10816 addons.go:530] duration metric: took 1m53.6807689s for enable addons: enabled=[]
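This storageclass failure is a symptom of the same apiserver outage rather than a bad manifest: kubectl validation needs to download the /openapi/v2 schema from the apiserver behind localhost:8441, and that socket is refusing connections, so validation (and any apply) must fail until the apiserver recovers. The --validate=false escape hatch suggested in the error would only mask the symptom. minikube logs "apply failed, will retry" and re-runs the same command; a sketch of that retry-the-exec pattern follows, with the binary path taken from the log and the retry policy purely illustrative (this is not addons.go's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry shells out the same way the log above shows and keeps
// retrying while the apiserver is refusing connections. The kubectl
// path comes from the log; the attempt count and wait are assumptions.
func applyWithRetry(manifest string, attempts int, wait time.Duration) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig", // sudo VAR=value cmd form, as logged
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i, err, out)
		fmt.Println("apply failed, will retry:", lastErr)
		time.Sleep(wait)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 10*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
```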
	[log elided: the retry cycles continue unchanged once per second; the node_ready.go EOF warning for "functional-002200" recurs at 04:56:08, 04:56:18, 04:56:28, 04:56:38, 04:56:48, 04:56:58, and 04:57:08, and the excerpt breaks off mid-cycle at 04:57:15 on attempt 7]
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:15.656319   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:16.656586   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:16.656586   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:16.659754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:17.659826   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:17.659826   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:17.663603   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:18.664062   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:18.664062   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:18.667107   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:18.667107   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:18.667107   10816 type.go:168] "Request Body" body=""
	I1216 04:57:18.667107   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:18.669486   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:19.670016   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:19.670016   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:19.672638   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:20.673464   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:20.673464   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:20.677620   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:57:21.678112   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:21.678112   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:21.681513   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:22.681689   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:22.681995   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:22.685092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:23.685629   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:23.685980   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:23.689156   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:24.689510   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:24.689510   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:24.692985   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:25.693807   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:25.693807   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:25.697191   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:26.697691   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:26.697691   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:26.701914   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:57:27.702516   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:27.702516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:27.705661   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:28.706672   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:28.706672   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:28.709206   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:57:28.709740   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:28.709807   10816 type.go:168] "Request Body" body=""
	I1216 04:57:28.709807   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:28.711563   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:57:29.711944   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:29.712335   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:29.715833   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:30.716017   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:30.716017   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:30.718719   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:31.719441   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:31.719441   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:31.722783   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:32.722947   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:32.723366   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:32.726287   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:33.726757   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:33.726757   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:33.730225   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:34.730767   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:34.730767   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:34.734197   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:35.734516   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:35.734516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:35.738082   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:36.738414   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:36.738414   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:36.741636   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:37.742028   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:37.742028   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:37.745720   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:38.746648   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:38.746648   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:38.750213   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:38.750735   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:38.750811   10816 type.go:168] "Request Body" body=""
	I1216 04:57:38.750811   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:38.753365   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:39.754170   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:39.754494   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:39.756672   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:40.757075   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:40.757075   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:40.760090   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:41.761085   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:41.761085   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:41.764167   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:42.764607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:42.764607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:42.767925   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:43.768223   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:43.768223   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:43.771724   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:44.772020   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:44.772318   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:44.775672   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:45.776480   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:45.776480   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:45.778942   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:46.779437   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:46.779437   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:46.782462   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:47.783516   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:47.783516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:47.786792   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:48.787104   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:48.787104   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:48.790218   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:48.790218   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:48.790333   10816 type.go:168] "Request Body" body=""
	I1216 04:57:48.790436   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:48.792857   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:49.793117   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:49.793422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:49.796265   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:50.797034   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:50.797034   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:50.800135   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:51.800692   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:51.800692   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:51.803658   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:52.804509   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:52.804920   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:52.807718   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:53.808691   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:53.808691   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:53.811500   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:54.812293   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:54.812293   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:54.815510   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:55.815794   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:55.815794   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:55.818451   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:56.819222   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:56.819222   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:56.822148   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:57.823367   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:57.823367   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:57.826238   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:58.827282   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:58.827282   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:58.831278   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:58.831278   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:58.831278   10816 type.go:168] "Request Body" body=""
	I1216 04:57:58.831278   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:58.834101   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:59.834865   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:59.834865   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:59.838005   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:00.838338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:00.838338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:00.842079   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:01.842320   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:01.842587   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:01.846536   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:02.846765   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:02.846765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:02.849370   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:03.850175   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:03.850175   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:03.853386   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:04.853868   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:04.854373   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:04.857431   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:05.858201   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:05.858471   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:05.860804   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:06.862215   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:06.862215   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:06.865083   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:07.865404   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:07.865848   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:07.868243   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:08.868442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:08.868783   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:08.871646   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:58:08.871738   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:08.871913   10816 type.go:168] "Request Body" body=""
	I1216 04:58:08.872023   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:08.874694   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:09.875136   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:09.875136   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:09.878881   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:10.879915   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:10.880365   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:10.883263   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:11.883912   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:11.883912   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:11.887249   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:12.888328   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:12.888328   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:12.891295   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:13.891657   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:13.891657   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:13.895474   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:14.896600   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:14.896600   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:14.900025   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:15.900244   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:15.900674   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:15.903477   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:16.903646   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:16.904044   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:16.906787   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:17.907771   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:17.908158   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:17.910577   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:18.911153   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:18.911153   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:18.914890   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:58:18.914948   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:18.914948   10816 type.go:168] "Request Body" body=""
	I1216 04:58:18.914948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:18.917403   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:19.918088   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:19.918527   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:19.921232   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:20.921801   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:20.921801   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:20.925689   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:21.925981   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:21.925981   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:21.929421   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:22.929692   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:22.929692   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:22.934085   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:23.934312   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:23.934757   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:23.937761   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:24.938769   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:24.939209   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:24.942444   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:25.943100   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:25.943100   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:25.945226   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:26.945701   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:26.946109   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:26.947829   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:58:27.948365   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:27.948365   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:27.951830   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:28.952454   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:28.952454   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:28.956623   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1216 04:58:28.956759   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:28.956909   10816 type.go:168] "Request Body" body=""
	I1216 04:58:28.956990   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:28.959476   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:29.960256   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:29.960546   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:29.963746   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:30.964110   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:30.964110   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:30.967396   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:31.967947   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:31.967947   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:31.971619   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:32.972256   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:32.972256   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:32.975092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:33.975992   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:33.975992   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:33.979330   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:34.979792   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:34.980275   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:34.985587   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:58:35.985861   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:35.985861   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:35.988919   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:36.989563   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:36.989563   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:36.993055   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:37.993776   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:37.993776   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:37.997175   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:38.998214   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:38.998214   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:39.001897   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:58:39.001897   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:39.001897   10816 type.go:168] "Request Body" body=""
	I1216 04:58:39.001897   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:39.006108   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:40.006288   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:40.006288   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:40.009323   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:41.009760   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:41.009760   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:41.013530   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:42.013827   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:42.013827   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:42.017014   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:43.018254   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:43.018254   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:43.020804   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:44.021283   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:44.021578   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:44.025175   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:45.025733   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:45.026038   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:45.028762   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:46.029139   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:46.029139   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:46.032822   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:47.033121   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:47.033121   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:47.036186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:48.037338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:48.037338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:48.041634   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:49.041943   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:49.041943   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:49.044552   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:58:49.044552   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[log condensed: the identical one-second retry cycle, a fresh GET to https://127.0.0.1:49316/api/v1/nodes/functional-002200 followed by Retry-After attempts 1-10 with every response empty after 1-4 ms, repeats without variation; the same node_ready.go:55 "will retry: EOF" warning recurs at 04:58:59, 04:59:09, 04:59:19, 04:59:29, 04:59:39, 04:59:49 and 04:59:59, and the final cycle has reached attempt 8 when the 6-minute deadline expires:]
	I1216 05:00:07.361136   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:07.361136   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:07.364543   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 05:00:07.871664   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 05:00:07.871664   10816 node_ready.go:38] duration metric: took 6m0.0002013s for node "functional-002200" to be "Ready" ...
	I1216 05:00:07.876577   10816 out.go:203] 
	W1216 05:00:07.879616   10816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 05:00:07.879616   10816 out.go:285] * 
	W1216 05:00:07.881276   10816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:00:07.884672   10816 out.go:203] 
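
The loop above is the readiness poll timing out: every GET to the forwarded apiserver endpoint is answered within a few milliseconds and then dropped with EOF, client-go's retry layer (with_retry.go) re-issues it after a 1 s Retry-After for up to 10 attempts, and minikube's node wait restarts the cycle until the 6m0s WaitNodeCondition deadline trips. Below is a minimal sketch of that polling pattern, assuming a standard client-go setup; the identifiers are illustrative, not minikube's actual node_ready.go.

// Sketch of the poll-until-Ready pattern recorded in the log above.
// Assumes client-go; names here are illustrative, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls once per second until the node reports Ready or the
// timeout expires. Transient errors (such as the EOFs above) are logged and
// swallowed so the poll keeps retrying, which is why the log repeats for 6m0s.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-002200", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

Whatever the exact implementation, the observable behavior matches this sketch: ten 1 s attempts per cycle, one warning per cycle, and a GUEST_START exit once the deadline passes.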
	
	
	==> Docker <==
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532904868Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532910769Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532962273Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.533000176Z" level=info msg="Initializing buildkit"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.632934284Z" level=info msg="Completed buildkit initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638730325Z" level=info msg="Daemon has completed initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638930540Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638973643Z" level=info msg="API listen on [::]:2376"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638987344Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:04 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 04:54:05 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Loaded network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 04:54:05 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:00:10.525020   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:00:10.526163   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:00:10.527703   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:00:10.529528   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:00:10.530632   17477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001061] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001041] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000838] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001072] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 04:54] CPU: 8 PID: 53756 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001140] RIP: 0033:0x7f1fa5473b20
	[  +0.000543] Code: Unable to access opcode bytes at RIP 0x7f1fa5473af6.
	[  +0.001042] RSP: 002b:00007ffde8c4f290 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000944] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001046] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000944] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001149] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000795] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000802] FS:  0000000000000000 GS:  0000000000000000
	[  +0.814553] CPU: 10 PID: 53882 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000797] RIP: 0033:0x7f498f339b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7f498f339af6.
	[  +0.000625] RSP: 002b:00007ffc77d465d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000824] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:00:10 up 36 min,  0 user,  load average: 0.48, 0.42, 0.60
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:00:07 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:00:08 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 816.
	Dec 16 05:00:08 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:08 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:08 functional-002200 kubelet[17310]: E1216 05:00:08.099916   17310 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:00:08 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:00:08 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:00:08 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 817.
	Dec 16 05:00:08 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:08 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:08 functional-002200 kubelet[17324]: E1216 05:00:08.897768   17324 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:00:08 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:00:08 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:00:09 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 818.
	Dec 16 05:00:09 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:09 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:09 functional-002200 kubelet[17353]: E1216 05:00:09.600950   17353 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:00:09 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:00:09 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:00:10 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 819.
	Dec 16 05:00:10 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:10 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:00:10 functional-002200 kubelet[17421]: E1216 05:00:10.352679   17421 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:00:10 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:00:10 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (596.5882ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (373.10s)
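The kubelet section of the captured logs above shows the proximate cause: every systemd restart (counters 816 through 819) exits with "kubelet is configured to not run on a host using cgroup v1", so the node can never report Ready and the 6m0s node wait in SoftStart times out. A minimal check of what the node container actually sees, assuming the container name from this report; stat on the cgroup mount is the usual way to tell v1 from v2:

    # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1 (the failing case here)
    docker exec functional-002200 stat -fc %T /sys/fs/cgroup/

A "tmpfs" result would mean the host (here the WSL2 kernel backing Docker Desktop) needs to be moved to cgroup v2 before this kubelet version can start.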

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-002200 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-002200 get po -A: exit status 1 (50.3887752s)

                                                
                                                
** stderr ** 
	E1216 05:00:22.424275    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:00:32.469021    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:00:42.510338    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:00:52.551767    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:01:02.593829    7924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-002200 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1216 05:00:22.424275    7924 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:49316/api?timeout=32s\\\": EOF\"\nE1216 05:00:32.469021    7924 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:49316/api?timeout=32s\\\": EOF\"\nE1216 05:00:42.510338    7924 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:49316/api?timeout=32s\\\": EOF\"\nE1216 05:00:52.551767    7924 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:49316/api?timeout=32s\\\": EOF\"\nE1216 05:01:02.593829    7924 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:49316/api?timeout=32s\\\": EOF\"\nUnable to connect to the server: EOF\n"*: args "kubectl --context functional-002200 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-002200 get po -A"
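The EOFs above all come from https://127.0.0.1:49316, which (per the docker inspect output below) is the host port Docker mapped to the node's apiserver port 8441/tcp. A sketch for probing that mapping directly, with the container name and port taken from this report; with the apiserver down these probes would fail the same way kubectl does:

    # show the host port Docker mapped to the apiserver's 8441/tcp
    docker port functional-002200 8441
    # probe the apiserver health endpoint through that mapping (-k because minikube uses a self-signed CA)
    curl -k https://127.0.0.1:49316/livez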
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
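In the inspect output above, HostConfig.PortBindings requests "HostPort": "0" for every forwarded port, i.e. an ephemeral host port; the resolved values appear only under NetworkSettings.Ports (8441/tcp -> 127.0.0.1:49316, matching the address kubectl failed against). The same Go template the minikube logs below use for 22/tcp works for any of these ports, for example:

    # extract the host port mapped to the apiserver port; container name from this report
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-002200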
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (595.0984ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.1460673s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service        │ functional-902700 service hello-node --url --format={{.IP}}                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr     │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image save --daemon kicbase/echo-server:functional-902700 --alsologtostderr                           │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/11704.pem                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /usr/share/ca-certificates/11704.pem                                                     │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/51391683.0                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/117042.pem                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /usr/share/ca-certificates/117042.pem                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh sudo cat /etc/test/nested/copy/11704/hosts                                                        │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ update-context │ functional-902700 update-context --alsologtostderr -v=2                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ update-context │ functional-902700 update-context --alsologtostderr -v=2                                                                 │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format short --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ ssh            │ functional-902700 ssh pgrep buildkitd                                                                                   │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr                  │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service        │ functional-902700 service hello-node --url                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image          │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format yaml --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format table --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image          │ functional-902700 image ls --format json --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ delete         │ -p functional-902700                                                                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │ 16 Dec 25 04:45 UTC │
	│ start          │ -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │                     │
	│ start          │ -p functional-002200 --alsologtostderr -v=8                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:53 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:53:59
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:53:59.077529   10816 out.go:360] Setting OutFile to fd 1388 ...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.120079   10816 out.go:374] Setting ErrFile to fd 1504...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.134125   10816 out.go:368] Setting JSON to false
	I1216 04:53:59.136333   10816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1860,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:53:59.136333   10816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:53:59.140588   10816 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:53:59.143257   10816 notify.go:221] Checking for updates...
	I1216 04:53:59.144338   10816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:53:59.146335   10816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:53:59.148852   10816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:53:59.153389   10816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:53:59.155692   10816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:53:59.158810   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:53:59.158810   10816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:53:59.271386   10816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:53:59.275857   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.515409   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.497557869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.520423   10816 out.go:179] * Using the docker driver based on existing profile
	I1216 04:53:59.523406   10816 start.go:309] selected driver: docker
	I1216 04:53:59.523406   10816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.523406   10816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:53:59.529406   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.757949   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.738153267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.838476   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:53:59.838476   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:53:59.838997   10816 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.842569   10816 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 04:53:59.844586   10816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:53:59.847541   10816 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:53:59.850024   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:53:59.850024   10816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:53:59.850184   10816 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 04:53:59.850253   10816 cache.go:65] Caching tarball of preloaded images
	I1216 04:53:59.850408   10816 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 04:53:59.850408   10816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 04:53:59.850408   10816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 04:53:59.925943   10816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:53:59.925943   10816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:53:59.926465   10816 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:53:59.926540   10816 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:53:59.926717   10816 start.go:364] duration metric: took 124.8µs to acquireMachinesLock for "functional-002200"
	I1216 04:53:59.926803   10816 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:53:59.926803   10816 fix.go:54] fixHost starting: 
	I1216 04:53:59.933877   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:53:59.985861   10816 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 04:53:59.986777   10816 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:53:59.990712   10816 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 04:53:59.990712   10816 machine.go:94] provisionDockerMachine start ...
	I1216 04:53:59.994611   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.050133   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.050702   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.050702   10816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:54:00.224414   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.224414   10816 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 04:54:00.228183   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.284942   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.285440   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.285501   10816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 04:54:00.466400   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.469396   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.520394   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.520394   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.521395   10816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:54:00.690074   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:54:00.690074   10816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 04:54:00.690074   10816 ubuntu.go:190] setting up certificates
	I1216 04:54:00.690074   10816 provision.go:84] configureAuth start
	I1216 04:54:00.694148   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:00.751989   10816 provision.go:143] copyHostCerts
	I1216 04:54:00.752186   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1216 04:54:00.752528   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 04:54:00.752557   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 04:54:00.752557   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 04:54:00.753298   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1216 04:54:00.753298   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 04:54:00.753298   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 04:54:00.754021   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 04:54:00.754554   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1216 04:54:00.754554   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 04:54:00.754554   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 04:54:00.755135   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 04:54:00.755694   10816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 04:54:00.834817   10816 provision.go:177] copyRemoteCerts
	I1216 04:54:00.838808   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:54:00.841808   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.896045   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:01.027660   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1216 04:54:01.027660   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:54:01.054957   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1216 04:54:01.054957   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:54:01.077598   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1216 04:54:01.077598   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:54:01.104237   10816 provision.go:87] duration metric: took 414.1604ms to configureAuth
	I1216 04:54:01.104237   10816 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:54:01.105157   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:01.110636   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.168864   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.169525   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.169551   10816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 04:54:01.355861   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 04:54:01.355861   10816 ubuntu.go:71] root file system type: overlay
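The fstype probe is a one-liner; run inside the node it reports the kicbase container's overlay root, matching the ubuntu.go:71 line above:

	df --output=fstype / | tail -n 1   # prints: overlay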
	I1216 04:54:01.355861   10816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 04:54:01.359632   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.417983   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.418643   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.418643   10816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 04:54:01.607477   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 04:54:01.611072   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.665669   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.666241   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.666241   10816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 04:54:01.838018   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
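The empty output above suggests the unit on disk already matched the rendered one, so no move or restart was triggered. The update is idempotent: write to docker.service.new, then replace and restart only on a difference. A generic sketch of the same pattern (unit name hypothetical):

	# replace-and-restart only when the rendered unit differs from the live one
	sudo diff -u /lib/systemd/system/example.service /tmp/example.service.new \
	  || { sudo mv /tmp/example.service.new /lib/systemd/system/example.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart example.service; }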
	I1216 04:54:01.838065   10816 machine.go:97] duration metric: took 1.8473421s to provisionDockerMachine
	I1216 04:54:01.838112   10816 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 04:54:01.838112   10816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:54:01.842730   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:54:01.845927   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.899710   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.030948   10816 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:54:02.037585   10816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_ID="12"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:54:02.037585   10816 command_runner.go:130] > ID=debian
	I1216 04:54:02.037585   10816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:54:02.037585   10816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:54:02.037585   10816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:54:02.037585   10816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:54:02.037585   10816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 04:54:02.038695   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 04:54:02.038739   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /etc/ssl/certs/117042.pem
	I1216 04:54:02.039358   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 04:54:02.039390   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> /etc/test/nested/copy/11704/hosts
	I1216 04:54:02.043645   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 04:54:02.054687   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 04:54:02.077250   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 04:54:02.106199   10816 start.go:296] duration metric: took 268.0858ms for postStartSetup
	I1216 04:54:02.110518   10816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:54:02.114167   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.171516   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.294935   10816 command_runner.go:130] > 1%
	I1216 04:54:02.299449   10816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:54:02.309560   10816 command_runner.go:130] > 950G
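The two probes above report /var at 1% used with 950G free; they are reproducible directly on the node:

	df -h /var  | awk 'NR==2{print $5}'   # percent of /var in use
	df -BG /var | awk 'NR==2{print $4}'   # space still available, in 1G blocks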
	I1216 04:54:02.309560   10816 fix.go:56] duration metric: took 2.3827424s for fixHost
	I1216 04:54:02.309560   10816 start.go:83] releasing machines lock for "functional-002200", held for 2.3828036s
	I1216 04:54:02.313570   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:02.366171   10816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 04:54:02.371688   10816 ssh_runner.go:195] Run: cat /version.json
	I1216 04:54:02.371747   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.373884   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.425495   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.428440   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.530908   10816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1216 04:54:02.530908   10816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 04:54:02.552908   10816 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 04:54:02.557959   10816 ssh_runner.go:195] Run: systemctl --version
	I1216 04:54:02.566291   10816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:54:02.566291   10816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:54:02.571531   10816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:54:02.582535   10816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:54:02.582535   10816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:54:02.587977   10816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:54:02.599631   10816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:54:02.599684   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:02.599733   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:02.599952   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:02.620915   10816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 04:54:02.625275   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:54:02.642513   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:54:02.658404   10816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:54:02.664249   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:54:02.683612   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.703566   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:54:02.723114   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.741121   10816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:54:02.760533   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	W1216 04:54:02.771378   10816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 04:54:02.771378   10816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 04:54:02.781609   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:54:02.800465   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
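The run of sed edits above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false (cgroupfs driver), the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A quick check of the result after the restart:

	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml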
	I1216 04:54:02.819380   10816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:54:02.832241   10816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:54:02.836457   10816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
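kube-proxy and the bridge CNI both depend on the two kernel knobs checked above; on a healthy node both report 1:

	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward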
	I1216 04:54:02.854943   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:02.994394   10816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 04:54:03.139472   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:03.139472   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:03.143391   10816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > [Unit]
	I1216 04:54:03.162559   10816 command_runner.go:130] > Description=Docker Application Container Engine
	I1216 04:54:03.162647   10816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1216 04:54:03.162647   10816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1216 04:54:03.162647   10816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1216 04:54:03.162647   10816 command_runner.go:130] > Requires=docker.socket
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitBurst=3
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitIntervalSec=60
	I1216 04:54:03.162734   10816 command_runner.go:130] > [Service]
	I1216 04:54:03.162734   10816 command_runner.go:130] > Type=notify
	I1216 04:54:03.162734   10816 command_runner.go:130] > Restart=always
	I1216 04:54:03.162734   10816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1216 04:54:03.162807   10816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1216 04:54:03.162828   10816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1216 04:54:03.162828   10816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1216 04:54:03.162828   10816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1216 04:54:03.162900   10816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1216 04:54:03.162917   10816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1216 04:54:03.162917   10816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1216 04:54:03.162917   10816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1216 04:54:03.162917   10816 command_runner.go:130] > ExecStart=
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1216 04:54:03.163008   10816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNOFILE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNPROC=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitCORE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1216 04:54:03.163065   10816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1216 04:54:03.163065   10816 command_runner.go:130] > TasksMax=infinity
	I1216 04:54:03.163065   10816 command_runner.go:130] > TimeoutStartSec=0
	I1216 04:54:03.163065   10816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1216 04:54:03.163112   10816 command_runner.go:130] > Delegate=yes
	I1216 04:54:03.163112   10816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1216 04:54:03.163112   10816 command_runner.go:130] > KillMode=process
	I1216 04:54:03.163112   10816 command_runner.go:130] > OOMScoreAdjust=-500
	I1216 04:54:03.163112   10816 command_runner.go:130] > [Install]
	I1216 04:54:03.163112   10816 command_runner.go:130] > WantedBy=multi-user.target
	I1216 04:54:03.167400   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.188934   10816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:54:03.279029   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.300208   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:54:03.316692   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:03.338834   10816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
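With Docker selected as the runtime, /etc/crictl.yaml is rewritten from the containerd socket to cri-dockerd. The endpoint can also be passed explicitly, which sidesteps the YAML entirely:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version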
	I1216 04:54:03.343609   10816 ssh_runner.go:195] Run: which cri-dockerd
	I1216 04:54:03.350066   10816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1216 04:54:03.355212   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 04:54:03.369229   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 04:54:03.392646   10816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 04:54:03.524584   10816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 04:54:03.661458   10816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 04:54:03.661598   10816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 04:54:03.685520   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 04:54:03.708589   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:03.845683   10816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 04:54:04.645791   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:54:04.667182   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 04:54:04.690401   10816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 04:54:04.718176   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:04.738992   10816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 04:54:04.903819   10816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 04:54:05.034592   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.166883   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 04:54:05.190738   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 04:54:05.211273   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.344748   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 04:54:05.446097   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:05.463790   10816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 04:54:05.471347   10816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:54:05.478565   10816 command_runner.go:130] > Device: 0,112	Inode: 1751        Links: 1
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Modify: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Change: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] >  Birth: -
	I1216 04:54:05.478565   10816 start.go:564] Will wait 60s for crictl version
	I1216 04:54:05.482816   10816 ssh_runner.go:195] Run: which crictl
	I1216 04:54:05.491459   10816 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:54:05.496033   10816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:54:05.533167   10816 command_runner.go:130] > Version:  0.1.0
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeName:  docker
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:54:05.533167   10816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 04:54:05.536709   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.572362   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.576856   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.612780   10816 command_runner.go:130] > 29.1.3
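Both probes use the same Go template against the engine; quoted, this is the shell-safe way to run it interactively:

	docker version --format '{{.Server.Version}}'   # 29.1.3 on this node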
	I1216 04:54:05.616153   10816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 04:54:05.619706   10816 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 04:54:05.740410   10816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 04:54:05.744411   10816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 04:54:05.751410   10816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
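The host gateway is discovered by digging host.docker.internal from inside the node, then pinned as host.minikube.internal in /etc/hosts. Both halves can be replayed by hand:

	docker exec -t functional-002200 dig +short host.docker.internal
	minikube ssh -p functional-002200 -- grep host.minikube.internal /etc/hosts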
	I1216 04:54:05.754417   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:05.810199   10816 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:54:05.810199   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:54:05.814984   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.850393   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.850393   10816 docker.go:621] Images already preloaded, skipping extraction
	I1216 04:54:05.852935   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.887286   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
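The preload check simply lists the node's images and compares them against the expected set for v1.35.0-beta.0. The same listing is available from the host:

	minikube image ls -p functional-002200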
	I1216 04:54:05.887286   10816 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:54:05.887286   10816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 04:54:05.887286   10816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:54:05.890789   10816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 04:54:05.960191   10816 command_runner.go:130] > cgroupfs
	I1216 04:54:05.960191   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:54:05.960191   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:54:05.960191   10816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:54:05.960723   10816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:54:05.960947   10816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:54:05.964962   10816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubeadm
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubectl
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubelet
	I1216 04:54:05.978770   10816 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:54:05.983615   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:54:05.994290   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 04:54:06.017936   10816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:54:06.036718   10816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
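The rendered kubeadm config (2225 bytes) is staged as /var/tmp/minikube/kubeadm.yaml.new before the restart logic decides whether it differs from the live copy. As a hedged aside, recent kubeadm releases can lint such a file; assuming the validate subcommand is present in the v1.35.0-beta.0 binary staged on the node:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new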
	I1216 04:54:06.060901   10816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:54:06.072426   10816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:54:06.077308   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:06.213746   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:06.308797   10816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 04:54:06.308797   10816 certs.go:195] generating shared ca certs ...
	I1216 04:54:06.308797   10816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 04:54:06.310511   10816 certs.go:257] generating profile certs ...
	I1216 04:54:06.311535   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 04:54:06.311853   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 04:54:06.312156   10816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 04:54:06.312187   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:54:06.312277   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:54:06.312360   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:54:06.312444   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:54:06.312580   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:54:06.312673   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:54:06.312777   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:54:06.312890   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 04:54:06.313261   10816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 04:54:06.313921   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 04:54:06.314135   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 04:54:06.314531   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 04:54:06.314719   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem -> /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.315394   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:54:06.342547   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:54:06.368689   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:54:06.393638   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:54:06.418640   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:54:06.453759   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:54:06.476256   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:54:06.500532   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:54:06.524928   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 04:54:06.552508   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:54:06.575232   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 04:54:06.598894   10816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:54:06.620996   10816 ssh_runner.go:195] Run: openssl version
	I1216 04:54:06.631676   10816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:54:06.636278   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.653246   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:54:06.670292   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677576   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677653   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.681684   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.724946   10816 command_runner.go:130] > b5213941
	I1216 04:54:06.729462   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:54:06.747149   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.764470   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 04:54:06.780610   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.791611   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.834505   10816 command_runner.go:130] > 51391683
	I1216 04:54:06.839668   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:54:06.856437   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.871735   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 04:54:06.888873   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895775   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895828   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.900176   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.943961   10816 command_runner.go:130] > 3ec20f2e
	I1216 04:54:06.948620   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
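Each CA above follows the same install pattern: copy under /usr/share/ca-certificates, then symlink into /etc/ssl/certs under its OpenSSL subject hash with a .0 suffix, which is how OpenSSL's CApath lookup locates trust anchors. For the minikubeCA example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	ls -l /etc/ssl/certs/b5213941.0   # symlink created by the ln -fs above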
	I1216 04:54:06.964812   10816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:54:06.978768   10816 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: 2025-12-16 04:49:55.262290705 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Modify: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Change: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978868   10816 command_runner.go:130] >  Birth: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.982552   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:54:07.026352   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.030610   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:54:07.075026   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.079065   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:54:07.126638   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.131687   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:54:07.174667   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.179083   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:54:07.222822   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.227385   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:54:07.271975   10816 command_runner.go:130] > Certificate will not expire
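-checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds, so the same probe doubles as a renewal guard:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo 'valid for at least 24h' || echo 'renew soon'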
	I1216 04:54:07.271975   10816 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:54:07.276330   10816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 04:54:07.308756   10816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:54:07.320226   10816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:54:07.320341   10816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:54:07.320341   10816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:54:07.325132   10816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:54:07.336047   10816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:54:07.339740   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.398431   10816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-002200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.399021   10816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-002200" cluster setting kubeconfig missing "functional-002200" context setting]
	I1216 04:54:07.399534   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
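
kubeconfig.go detects that the functional-002200 cluster and context entries are missing and repairs the kubeconfig under a file lock. Below is a hedged client-go sketch of that repair; the path, server URL, and certificate locations are stand-ins for the Windows paths in the log.

// Load the kubeconfig, add the missing cluster/context/user entries
// for the profile, and write it back.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const path = "/home/user/.kube/config" // stand-in kubeconfig path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}

	cluster := api.NewCluster()
	cluster.Server = "https://127.0.0.1:49316"
	cluster.CertificateAuthority = "/path/to/ca.crt"
	cfg.Clusters["functional-002200"] = cluster

	auth := api.NewAuthInfo()
	auth.ClientCertificate = "/path/to/profiles/functional-002200/client.crt"
	auth.ClientKey = "/path/to/profiles/functional-002200/client.key"
	cfg.AuthInfos["functional-002200"] = auth

	ctx := api.NewContext()
	ctx.Cluster = "functional-002200"
	ctx.AuthInfo = "functional-002200"
	cfg.Contexts["functional-002200"] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
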
	I1216 04:54:07.418099   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.418579   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.419732   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
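
The envvar.go lines report client-go's feature-gate defaults for this process. To my understanding these gates can be overridden per process through KUBE_FEATURE_<Name> environment variables; the sketch below assumes that prefix and is illustrative only.

// Read a feature-gate override from the environment, falling back to
// the compiled-in default when the variable is unset or unparsable.
package main

import (
	"fmt"
	"os"
	"strconv"
)

func featureEnabled(name string, def bool) bool {
	v, ok := os.LookupEnv("KUBE_FEATURE_" + name)
	if !ok {
		return def
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return def
	}
	return b
}

func main() {
	fmt.Println("WatchListClient enabled:", featureEnabled("WatchListClient", false))
}
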
	I1216 04:54:07.424264   10816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:54:07.438954   10816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 04:54:07.439621   10816 kubeadm.go:602] duration metric: took 119.279ms to restartPrimaryControlPlane
	I1216 04:54:07.439621   10816 kubeadm.go:403] duration metric: took 167.6444ms to StartCluster
	I1216 04:54:07.439621   10816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.439755   10816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.440821   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.441789   10816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 04:54:07.441839   10816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:54:07.442048   10816 addons.go:70] Setting storage-provisioner=true in profile "functional-002200"
	I1216 04:54:07.442048   10816 addons.go:70] Setting default-storageclass=true in profile "functional-002200"
	I1216 04:54:07.442130   10816 addons.go:239] Setting addon storage-provisioner=true in "functional-002200"
	I1216 04:54:07.442130   10816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-002200"
	I1216 04:54:07.442187   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.442187   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:07.445437   10816 out.go:179] * Verifying Kubernetes components...
	I1216 04:54:07.450118   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.450857   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.452175   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:07.507771   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.508167   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.508951   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.508951   10816 addons.go:239] Setting addon default-storageclass=true in "functional-002200"
	I1216 04:54:07.508951   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.517556   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.537496   10816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:07.540287   10816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.540287   10816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:54:07.546774   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.582442   10816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.582442   10816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:54:07.586285   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
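
Both cli_runner calls above extract a published host port from docker container inspect using a Go template. A small sketch of the same call made from Go; the profile name comes from the log and the hostPort helper is hypothetical:

// Ask dockerd which host port is mapped to a container port, using the
// same `docker container inspect -f` template as the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("functional-002200", "22/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", port) // e.g. 49317 in this run
}
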
	I1216 04:54:07.606994   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:07.636962   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
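
sshutil.go then opens SSH sessions to the container through the forwarded port (49317) with the profile's id_rsa key. A minimal sketch with golang.org/x/crypto/ssh, assuming key-based auth to a local throwaway test container, which is the only reason the insecure host-key callback is tolerable here:

// Dial the container's forwarded SSH port with public-key auth.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/functional-002200/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49317", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("connected:", string(client.ServerVersion()))
}
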
	I1216 04:54:07.645869   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:07.765470   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.777346   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.811577   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.866167   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 node_ready.go:35] waiting up to 6m0s for node "functional-002200" to be "Ready" ...
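
node_ready.go now polls the node object until its Ready condition reports true, bounded by the 6m0s timeout. A hedged client-go sketch of such a wait loop; the kubeconfig path and poll interval are illustrative:

// Poll the node until its NodeReady condition is True or the deadline
// passes, tolerating transient GET errors (as the log does).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s in the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-002200", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node to be Ready")
}
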
	W1216 04:54:07.869156   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 retry.go:31] will retry after 143.37804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 type.go:168] "Request Body" body=""
	I1216 04:54:07.870154   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	W1216 04:54:07.870154   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 retry.go:31] will retry after 150.951622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
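
Each failed apply above is rescheduled by retry.go with a growing, jittered delay (143ms, 150ms, then roughly half-second steps and upward). A sketch of that pattern using apimachinery's wait.ExponentialBackoff; applyAddon is a stand-in for the kubectl apply the log runs over SSH:

// Retry a flaky operation with exponential, jittered backoff.
package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func applyAddon(manifest string) error {
	// Stand-in for the `kubectl apply -f` invocation in the log.
	return exec.Command("kubectl", "apply", "-f", manifest).Run()
}

func main() {
	backoff := wait.Backoff{
		Duration: 150 * time.Millisecond, // first delay, as in the log
		Factor:   2.0,                    // roughly double each step
		Jitter:   0.5,                    // randomize, giving uneven delays
		Steps:    10,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := applyAddon("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
			fmt.Println("apply failed, will retry:", err)
			return false, nil // keep retrying
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
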
	I1216 04:54:07.872075   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:54:08.018062   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.025836   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.095508   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.099951   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 retry.go:31] will retry after 537.200798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.103237   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.103772   10816 retry.go:31] will retry after 434.961679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.544092   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.626905   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.632935   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.632935   10816 retry.go:31] will retry after 617.835459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.641591   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.717034   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.721285   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.721336   10816 retry.go:31] will retry after 555.435942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.872382   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:08.872382   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:08.874726   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
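
The with_retry.go lines show the client honoring a Retry-After response by sleeping one second between node GETs. A minimal net/http sketch of that behavior; the real client also negotiates protobuf and client-cert TLS, which this omits:

// GET with Retry-After-aware retries: on failure, sleep for the
// server-suggested interval (default 1s) before trying again.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(url string, attempts int) (*http.Response, error) {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 400 {
			return resp, nil // caller closes the body
		}
		delay := time.Second // matches the 1s delays seen in the log
		if err == nil {
			if s := resp.Header.Get("Retry-After"); s != "" {
				if secs, perr := strconv.Atoi(s); perr == nil {
					delay = time.Duration(secs) * time.Second
				}
			}
			resp.Body.Close()
		}
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up on %s after %d attempts", url, attempts)
}

func main() {
	resp, err := getWithRetryAfter("https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
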
	I1216 04:54:09.256223   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:09.281163   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:09.337874   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.342648   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.342648   10816 retry.go:31] will retry after 1.171657048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.351506   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.353684   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.353684   10816 retry.go:31] will retry after 716.560141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.875116   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:09.875116   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:09.878246   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:10.075942   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:10.149131   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.153724   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.153724   10816 retry.go:31] will retry after 1.192910832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.520957   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:10.596120   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.600356   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.600356   10816 retry.go:31] will retry after 814.376196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.878697   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:10.879061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:10.882391   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:11.351917   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:11.419047   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:11.435699   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.435794   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.435828   10816 retry.go:31] will retry after 2.202073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.493635   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.497994   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.498062   10816 retry.go:31] will retry after 2.124694715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.883396   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:11.883898   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:11.886348   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:12.886583   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:12.886583   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:12.889839   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:13.629430   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:13.643127   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 3.773255134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 2.024299182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.890150   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:13.890150   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:13.893004   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:14.893300   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:14.893707   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:14.896357   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:15.748924   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:15.832154   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:15.836153   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.836153   10816 retry.go:31] will retry after 4.710098408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.897470   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:15.897470   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:15.900560   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:16.900812   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:16.900812   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:16.904208   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:17.498553   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:17.582081   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:17.582134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.582134   10816 retry.go:31] will retry after 4.959220117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.904607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:17.904607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.907482   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:17.907482   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:17.907482   10816 type.go:168] "Request Body" body=""
	I1216 04:54:17.907482   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.910186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:18.910930   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:18.910930   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:18.913636   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:19.913975   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:19.913975   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:19.917442   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:20.551463   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:20.635939   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:20.635939   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.635939   10816 retry.go:31] will retry after 7.302087091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.917543   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:20.917543   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:20.922152   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:21.922714   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:21.923090   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:21.925451   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:22.546716   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:22.623025   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:22.626750   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.626750   10816 retry.go:31] will retry after 6.831180284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.925790   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:22.925790   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:22.929352   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:23.930014   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:23.930092   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:23.932838   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:24.933846   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:24.934195   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:24.936622   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:25.937442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:25.937516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:25.940094   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:26.940283   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:26.940283   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:26.943747   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:27.943504   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:27.945094   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:27.945165   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.947573   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:27.947626   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:27.947734   10816 type.go:168] "Request Body" body=""
	I1216 04:54:27.947766   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.950140   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:28.023100   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:28.027085   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.027085   10816 retry.go:31] will retry after 8.693676062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.950523   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:28.950523   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:28.955399   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:29.463172   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:29.548936   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:29.548936   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.551954   10816 retry.go:31] will retry after 8.541447036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
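Each ssh_runner.go:195 "Run:" line is minikube executing kubectl inside the node. The equivalent invocation from Go looks roughly like this (a sketch using os/exec; the paths and flags come from the log above, everything else is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifest mirrors the command in the log: kubectl apply --force
    // against an addon manifest, with KUBECONFIG set via sudo's VAR=value form.
    func applyManifest(path string) error {
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", path)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s failed: %w\n%s", path, err, out)
        }
        return nil
    }

    func main() {
        if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
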
	I1216 04:54:29.956404   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:29.956404   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:29.959065   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:30.959708   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:30.959708   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:30.963012   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:31.964093   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:31.964093   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:31.967555   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:32.968057   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:32.968057   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:32.970609   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:33.971778   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:33.971778   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:33.975447   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:34.975764   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:34.975764   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:34.980867   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:54:35.981702   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:35.981702   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:35.985092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:36.726019   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:36.801339   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:36.806868   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.806868   10816 retry.go:31] will retry after 11.085665292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.986076   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:36.986076   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:36.989365   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:37.990461   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:37.990461   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.994420   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:54:37.994494   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:37.994613   10816 type.go:168] "Request Body" body=""
	I1216 04:54:37.994697   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.996806   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
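node_ready.go:55 is polling the apiserver for the node's Ready condition and getting EOF because the apiserver is still down. A sketch of that check with client-go (the kubeconfig path, poll interval, and attempt count are assumptions):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady fetches the node and reports whether its Ready condition is True.
    func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. EOF while the apiserver is restarting
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for attempt := 1; attempt <= 10; attempt++ { // poll loop like the log's
            ready, err := isNodeReady(context.Background(), cs, "functional-002200")
            if err == nil && ready {
                fmt.Println("node Ready")
                return
            }
            fmt.Printf("attempt %d: not ready yet (err=%v)\n", attempt, err)
            time.Sleep(time.Second)
        }
    }
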
	I1216 04:54:38.098931   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:38.175856   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:38.181908   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.181908   10816 retry.go:31] will retry after 20.635277746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.997597   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:38.997597   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:39.000931   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:40.001375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:40.001375   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:40.004974   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:41.005192   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:41.005192   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:41.007919   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:42.009105   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:42.009105   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:42.012612   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:43.013312   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:43.013312   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:43.016575   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:44.017297   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:44.017297   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:44.020296   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:45.020698   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:45.020698   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:45.023875   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:46.024607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:46.024607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:46.027947   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.028088   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:47.028746   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:47.032023   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.898206   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:47.976246   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:47.980090   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:47.980090   10816 retry.go:31] will retry after 12.179357603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:48.033037   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:48.033037   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.035808   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:48.035808   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:48.035808   10816 type.go:168] "Request Body" body=""
	I1216 04:54:48.035808   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.040977   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
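The paired round_trippers.go Request/Response lines come from a logging wrapper around the HTTP transport; status and headers are empty because the connection dies before a response arrives. The same effect can be reproduced with a small http.RoundTripper (a sketch; the output format only approximates client-go's):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingTransport logs every request's verb, URL, status, and latency,
    // in the spirit of the round_trippers.go lines above.
    type loggingTransport struct {
        next http.RoundTripper
    }

    func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        resp, err := t.next.RoundTrip(req)
        status := "" // stays empty on transport errors such as EOF
        if resp != nil {
            status = resp.Status
        }
        log.Printf("\"Response\" verb=%q url=%q status=%q milliseconds=%d",
            req.Method, req.URL.String(), status,
            time.Since(start).Milliseconds())
        return resp, err
    }

    func main() {
        client := &http.Client{Transport: &loggingTransport{next: http.DefaultTransport}}
        resp, err := client.Get("https://127.0.0.1:49316/api/v1/nodes/functional-002200")
        if err == nil {
            resp.Body.Close()
        }
    }
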
	I1216 04:54:49.041226   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:49.041572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:49.043632   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:50.044672   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:50.044672   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:50.048807   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:51.049032   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:51.049032   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:51.051895   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:52.052810   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:52.052810   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:52.056184   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:53.056422   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:53.056422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:53.059030   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:54.059750   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:54.060113   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:54.063020   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:55.063099   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:55.063099   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:55.066474   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:56.066822   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:56.066822   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:56.071205   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:57.071421   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:57.071421   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:57.073734   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:58.073939   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:58.073939   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.076906   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:58.076906   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:58.076906   10816 type.go:168] "Request Body" body=""
	I1216 04:54:58.076906   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.081072   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:58.823241   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:58.903750   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:58.908134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:58.908134   10816 retry.go:31] will retry after 21.057070222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
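Every apply fails identically because nothing is listening on port 8441 inside the node, so kubectl cannot download the OpenAPI schema it needs for client-side validation. A quick way to probe for that condition before retrying (a sketch; host and port are taken from the error text):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl failed with "dial tcp [::1]:8441: connect: connection refused";
        // this probes the same endpoint before bothering to retry the apply.
        conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }
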
	I1216 04:54:59.081704   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:59.082161   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:59.085119   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.085233   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:00.085233   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:00.088190   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.165511   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:00.236692   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:00.240478   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:00.240478   10816 retry.go:31] will retry after 25.698880398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:01.089206   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:01.089206   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:01.093274   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:55:02.094123   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:02.094422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:02.097156   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:03.098295   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:03.098295   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:03.102257   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:04.103035   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:04.103035   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:04.106884   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:05.107465   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:05.107465   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:05.110542   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:06.112033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:06.112033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:06.114883   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:07.115061   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:07.115061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:07.118200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:08.119287   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:08.119622   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.122289   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:08.122330   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:08.122429   10816 type.go:168] "Request Body" body=""
	I1216 04:55:08.122520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.125754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:09.126342   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:09.126818   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:09.129086   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:10.129383   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:10.129722   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:10.133200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:11.134173   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:11.134173   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:11.136746   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:12.137338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:12.137338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:12.140387   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:13.140819   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:13.140819   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:13.144315   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:14.144624   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:14.144624   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:14.146619   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:55:15.148016   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:15.148016   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:15.150667   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:16.151188   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:16.151188   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:16.154512   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:17.154762   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:17.154762   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:17.157863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:18.158498   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:18.158835   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.161129   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:18.161129   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:18.161666   10816 type.go:168] "Request Body" body=""
	I1216 04:55:18.161765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.165763   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.166375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:19.166948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:19.170530   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.970281   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:55:20.048987   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:20.052948   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.052948   10816 retry.go:31] will retry after 40.980819462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.171417   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:20.171417   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:20.174285   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:21.174459   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:21.174459   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:21.178349   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:22.178639   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:22.178639   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:22.182103   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:23.182373   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:23.182373   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:23.186196   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:24.187572   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:24.187572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:24.190721   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:25.191259   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:25.191259   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:25.193863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:25.945563   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:26.023336   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
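with_retry.go:234 shows the client honoring a Retry-After response: it sleeps the advertised delay (1s here) and re-issues the GET, up to ten attempts per request, after which the EOF is surfaced to node_ready.go. A simplified sketch of that loop (the attempt cap and one-second default mirror the log; real client-go also parses HTTP-date values and propagates the final error):

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // getWithRetryAfter retries a GET while the server answers with a
    // Retry-After header, up to maxAttempts tries.
    func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
        for attempt := 1; ; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                return nil, err
            }
            ra := resp.Header.Get("Retry-After")
            if ra == "" || attempt == maxAttempts {
                return resp, nil // stop retrying and hand back what we got
            }
            resp.Body.Close()
            secs, perr := strconv.Atoi(ra) // simplified: ignores the HTTP-date form
            if perr != nil || secs < 1 {
                secs = 1
            }
            fmt.Printf("Got a Retry-After response delay=%ds attempt=%d url=%s\n",
                secs, attempt, url)
            time.Sleep(time.Duration(secs) * time.Second)
        }
    }

    func main() {
        resp, err := getWithRetryAfter(http.DefaultClient,
            "https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        fmt.Println("status:", resp.Status)
        resp.Body.Close()
    }
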
	I1216 04:55:26.194033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:26.194033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:26.196611   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:27.198100   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:27.198100   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:27.201373   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:28.202260   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:28.202336   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:28.205520   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:55:28.205520   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:28.205520   10816 type.go:168] "Request Body" body=""
	I1216 04:55:28.205520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:28.207479   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:55:29.208141   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:29.208141   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:29.210912   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:30.211277   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:30.211277   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:30.215183   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:31.215597   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:31.216087   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:31.220042   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:32.220845   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:32.220845   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:32.224468   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:33.225011   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:33.225011   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:33.227593   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:34.228072   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:34.228072   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:34.232200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:55:35.233142   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:35.233142   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:35.236555   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:36.236770   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:36.236770   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:36.239805   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:37.240445   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:37.240445   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:37.244092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:38.245044   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:38.245410   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:38.248594   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:55:38.248691   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[the same GET https://127.0.0.1:49316/api/v1/nodes/functional-002200 request / Retry-After / response cycle repeats every 1s (attempts 1-10 per cycle), with the node_ready EOF warning recurring at 04:55:48 and 04:55:58; polling continues through 04:56:00]
	I1216 04:56:01.039745   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:56:01.115386   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115386   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115924   10816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:56:01.120162   10816 out.go:179] * Enabled addons: 
	I1216 04:56:01.123251   10816 addons.go:530] duration metric: took 1m53.6807689s for enable addons: enabled=[]
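The storageclass apply fails because kubectl's client-side validation has to download the OpenAPI schema from the apiserver, and localhost:8441 is still refusing connections; addons.go logs "apply failed, will retry" and ultimately finishes with enabled=[]. A rough sketch of that retry shape follows, assuming a shelled-out kubectl exactly as the log runs it; applyWithRetry and its fixed 1s backoff are illustrative, not minikube's actual addons code.

	// Hypothetical sketch of the addon-apply retry seen in the log: run
	// `sudo KUBECONFIG=... kubectl apply --force -f <manifest>` and, while the
	// apiserver is unreachable, retry with a delay instead of failing outright.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(kubeconfig, kubectl, manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
				"apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
			time.Sleep(time.Second) // real code would use backoff with a deadline
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
			fmt.Println(err)
		}
	}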
	[polling resumes after the addons step: the identical 1s Retry-After cycle against https://127.0.0.1:49316/api/v1/nodes/functional-002200 repeats, with the node_ready EOF warning recurring at 04:56:08, 04:56:18, 04:56:28, 04:56:38, 04:56:48, 04:56:58, and 04:57:08; the excerpt breaks off mid-request at 04:57:13]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:13.649321   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:14.649513   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:14.649513   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:14.652510   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:15.652980   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:15.652980   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:15.656319   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:16.656586   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:16.656586   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:16.659754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:17.659826   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:17.659826   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:17.663603   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:18.664062   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:18.664062   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:18.667107   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:18.667107   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:18.667107   10816 type.go:168] "Request Body" body=""
	I1216 04:57:18.667107   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:18.669486   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:19.670016   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:19.670016   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:19.672638   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:20.673464   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:20.673464   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:20.677620   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:57:21.678112   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:21.678112   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:21.681513   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:22.681689   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:22.681995   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:22.685092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:23.685629   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:23.685980   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:23.689156   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:24.689510   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:24.689510   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:24.692985   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:25.693807   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:25.693807   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:25.697191   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:26.697691   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:26.697691   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:26.701914   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:57:27.702516   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:27.702516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:27.705661   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:28.706672   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:28.706672   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:28.709206   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:57:28.709740   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:28.709807   10816 type.go:168] "Request Body" body=""
	I1216 04:57:28.709807   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:28.711563   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:57:29.711944   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:29.712335   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:29.715833   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:30.716017   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:30.716017   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:30.718719   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:31.719441   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:31.719441   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:31.722783   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:32.722947   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:32.723366   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:32.726287   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:33.726757   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:33.726757   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:33.730225   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:34.730767   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:34.730767   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:34.734197   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:35.734516   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:35.734516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:35.738082   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:36.738414   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:36.738414   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:36.741636   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:37.742028   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:37.742028   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:37.745720   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:38.746648   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:38.746648   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:38.750213   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:38.750735   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:38.750811   10816 type.go:168] "Request Body" body=""
	I1216 04:57:38.750811   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:38.753365   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:39.754170   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:39.754494   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:39.756672   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:40.757075   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:40.757075   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:40.760090   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:41.761085   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:41.761085   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:41.764167   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:42.764607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:42.764607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:42.767925   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:43.768223   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:43.768223   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:43.771724   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:44.772020   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:44.772318   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:44.775672   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:45.776480   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:45.776480   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:45.778942   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:46.779437   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:46.779437   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:46.782462   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:47.783516   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:47.783516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:47.786792   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:48.787104   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:48.787104   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:48.790218   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:48.790218   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:48.790333   10816 type.go:168] "Request Body" body=""
	I1216 04:57:48.790436   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:48.792857   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:49.793117   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:49.793422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:49.796265   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:50.797034   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:50.797034   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:50.800135   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:51.800692   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:51.800692   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:51.803658   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:52.804509   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:52.804920   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:52.807718   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:53.808691   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:53.808691   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:53.811500   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:54.812293   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:54.812293   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:54.815510   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:55.815794   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:55.815794   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:55.818451   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:56.819222   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:56.819222   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:56.822148   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:57.823367   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:57.823367   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:57.826238   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:58.827282   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:58.827282   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:58.831278   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:58.831278   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:58.831278   10816 type.go:168] "Request Body" body=""
	I1216 04:57:58.831278   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:58.834101   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:59.834865   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:59.834865   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:59.838005   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:00.838338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:00.838338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:00.842079   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:01.842320   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:01.842587   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:01.846536   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:02.846765   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:02.846765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:02.849370   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:03.850175   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:03.850175   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:03.853386   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:04.853868   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:04.854373   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:04.857431   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:05.858201   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:05.858471   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:05.860804   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:06.862215   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:06.862215   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:06.865083   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:07.865404   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:07.865848   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:07.868243   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:08.868442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:08.868783   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:08.871646   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:58:08.871738   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:08.871913   10816 type.go:168] "Request Body" body=""
	I1216 04:58:08.872023   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:08.874694   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:09.875136   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:09.875136   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:09.878881   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:10.879915   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:10.880365   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:10.883263   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:11.883912   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:11.883912   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:11.887249   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:12.888328   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:12.888328   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:12.891295   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:13.891657   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:13.891657   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:13.895474   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:14.896600   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:14.896600   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:14.900025   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:15.900244   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:15.900674   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:15.903477   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:16.903646   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:16.904044   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:16.906787   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:17.907771   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:17.908158   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:17.910577   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:18.911153   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:18.911153   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:18.914890   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:58:18.914948   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:18.914948   10816 type.go:168] "Request Body" body=""
	I1216 04:58:18.914948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:18.917403   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:19.918088   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:19.918527   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:19.921232   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:20.921801   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:20.921801   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:20.925689   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:21.925981   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:21.925981   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:21.929421   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:22.929692   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:22.929692   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:22.934085   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:23.934312   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:23.934757   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:23.937761   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:24.938769   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:24.939209   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:24.942444   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:25.943100   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:25.943100   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:25.945226   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:26.945701   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:26.946109   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:26.947829   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:58:27.948365   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:27.948365   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:27.951830   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:28.952454   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:28.952454   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:28.956623   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1216 04:58:28.956759   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:28.956909   10816 type.go:168] "Request Body" body=""
	I1216 04:58:28.956990   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:28.959476   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:29.960256   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:29.960546   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:29.963746   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:30.964110   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:30.964110   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:30.967396   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:31.967947   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:31.967947   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:31.971619   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:32.972256   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:32.972256   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:32.975092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:33.975992   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:33.975992   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:33.979330   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:34.979792   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:34.980275   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:34.985587   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:58:35.985861   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:35.985861   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:35.988919   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:36.989563   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:36.989563   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:36.993055   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:37.993776   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:37.993776   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:37.997175   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:38.998214   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:38.998214   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:39.001897   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:58:39.001897   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:39.001897   10816 type.go:168] "Request Body" body=""
	I1216 04:58:39.001897   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:39.006108   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1216 04:58:49.044552   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 04:58:59.085627   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 04:59:09.126506   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 04:59:19.167509   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 04:59:29.209566   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 04:59:39.248453   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 04:59:49.288292   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 04:59:59.330828   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	W1216 05:00:07.871664   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 05:00:07.871664   10816 node_ready.go:38] duration metric: took 6m0.0002013s for node "functional-002200" to be "Ready" ...
	I1216 05:00:07.876577   10816 out.go:203] 
	W1216 05:00:07.879616   10816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 05:00:07.879616   10816 out.go:285] * 
	W1216 05:00:07.881276   10816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:00:07.884672   10816 out.go:203] 
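
The loop in the log above is the client waiting for the node's "Ready" condition: one GET per second against /api/v1/nodes/functional-002200, with client-go's with_retry.go honoring the server's Retry-After response for up to 10 attempts, until the 6m0s wait deadline expires and GUEST_START fails. The sketch below is a minimal illustration of that wait pattern using client-go; it is not minikube's actual node_ready.go, and the kubeconfig path and node name are assumptions taken from client-go defaults and from the log.

	// node_ready_sketch.go: poll a node once per second until its Ready
	// condition is True or a 6-minute deadline expires (illustration only).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: default kubeconfig location (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Same shape as the log: a hard 6m0s deadline around repeated 1s polls.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-002200", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			if err != nil {
				// client-go retries Retry-After responses internally (the
				// with_retry.go lines in the log); persistent errors land here.
				fmt.Println("will retry:", err)
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node:", ctx.Err())
				return
			case <-time.After(time.Second):
			}
		}
	}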
	
	
	==> Docker <==
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532904868Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532910769Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532962273Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.533000176Z" level=info msg="Initializing buildkit"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.632934284Z" level=info msg="Completed buildkit initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638730325Z" level=info msg="Daemon has completed initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638930540Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638973643Z" level=info msg="API listen on [::]:2376"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638987344Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:04 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 04:54:05 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Loaded network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 04:54:05 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:01:04.272622   18465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:01:04.273330   18465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:01:04.276391   18465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:01:04.277935   18465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:01:04.278752   18465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001061] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001041] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000838] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001072] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 04:54] CPU: 8 PID: 53756 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001140] RIP: 0033:0x7f1fa5473b20
	[  +0.000543] Code: Unable to access opcode bytes at RIP 0x7f1fa5473af6.
	[  +0.001042] RSP: 002b:00007ffde8c4f290 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000944] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001046] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000944] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001149] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000795] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000802] FS:  0000000000000000 GS:  0000000000000000
	[  +0.814553] CPU: 10 PID: 53882 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000797] RIP: 0033:0x7f498f339b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7f498f339af6.
	[  +0.000625] RSP: 002b:00007ffc77d465d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000824] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:01:04 up 37 min,  0 user,  load average: 0.21, 0.35, 0.56
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:01:01 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:01:02 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 888.
	Dec 16 05:01:02 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:02 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:02 functional-002200 kubelet[18322]: E1216 05:01:02.090119   18322 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:01:02 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:01:02 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:01:02 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 889.
	Dec 16 05:01:02 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:02 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:02 functional-002200 kubelet[18333]: E1216 05:01:02.874955   18333 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:01:02 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:01:02 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:01:03 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 890.
	Dec 16 05:01:03 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:03 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:03 functional-002200 kubelet[18361]: E1216 05:01:03.606047   18361 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:01:03 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:01:03 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:01:04 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 891.
	Dec 16 05:01:04 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:04 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:01:04 functional-002200 kubelet[18474]: E1216 05:01:04.359340   18474 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:01:04 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:01:04 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
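
The kubelet excerpt above is the proximate failure: kubelet v1.35.0-beta.0 refuses to start because the WSL2 host is still on cgroup v1, so the API server behind 127.0.0.1:49316 never comes up and every probe times out. A minimal way to confirm which cgroup hierarchy the node sees (a sketch, assuming the functional-002200 container is still running):

	# prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1
	docker exec functional-002200 stat -fc %T /sys/fs/cgroup
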
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (583.2896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.66s)
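
The 6m0s node_ready timeout in the log above was polling the node's Ready condition. A rough manual equivalent of that probe (a sketch, assuming the kubeconfig context survived the failed start):

	# prints "True" once the node reports Ready
	kubectl --context functional-002200 get node functional-002200 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
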

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (53.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 kubectl -- --context functional-002200 get pods
E1216 05:02:01.797948   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:731: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 kubectl -- --context functional-002200 get pods: exit status 1 (50.5911728s)

                                                
                                                
** stderr ** 
	E1216 05:01:35.316618   13936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:01:45.408578   13936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:01:55.448948   13936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:02:05.490085   13936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:02:15.532951   13936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-002200 kubectl -- --context functional-002200 get pods": exit status 1
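
The repeated EOFs against https://127.0.0.1:49316 are kubectl's API discovery calls dying before the API server answers. A quicker probe that skips discovery and asks the apiserver health endpoint directly (a sketch, assuming the same context):

	kubectl --context functional-002200 get --raw /readyz
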
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
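
The inspect output confirms the port wiring: container port 8441/tcp (the cluster's --apiserver-port) is published on 127.0.0.1:49316, exactly the address the failing kubectl calls were hitting. The same mapping can be read without the full dump, e.g.:

	# prints 127.0.0.1:49316 for this container
	docker port functional-002200 8441
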
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (580.5274ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
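
Host reading "Running" while APIServer earlier read "Stopped" is consistent with a live kic container whose kubelet is crash-looping. Both fields come from the same status call and can be read together, for example:

	minikube status -p functional-002200 --format="{{.Host}} {{.APIServer}} {{.Kubelet}}"
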
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.1858447s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-902700 ssh pgrep buildkitd                                                                                   │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image   │ functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr                  │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service │ functional-902700 service hello-node --url                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image   │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format yaml --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format table --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format json --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ delete  │ -p functional-902700                                                                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │ 16 Dec 25 04:45 UTC │
	│ start   │ -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │                     │
	│ start   │ -p functional-002200 --alsologtostderr -v=8                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:53 UTC │                     │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.1                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.3                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:latest                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add minikube-local-cache-test:functional-002200                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache delete minikube-local-cache-test:functional-002200                                              │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl images                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	│ cache   │ functional-002200 cache reload                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ kubectl │ functional-002200 kubectl -- --context functional-002200 get pods                                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:53:59
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:53:59.077529   10816 out.go:360] Setting OutFile to fd 1388 ...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.120079   10816 out.go:374] Setting ErrFile to fd 1504...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.134125   10816 out.go:368] Setting JSON to false
	I1216 04:53:59.136333   10816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1860,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:53:59.136333   10816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:53:59.140588   10816 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:53:59.143257   10816 notify.go:221] Checking for updates...
	I1216 04:53:59.144338   10816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:53:59.146335   10816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:53:59.148852   10816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:53:59.153389   10816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:53:59.155692   10816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:53:59.158810   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:53:59.158810   10816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:53:59.271386   10816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:53:59.275857   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.515409   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.497557869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.520423   10816 out.go:179] * Using the docker driver based on existing profile
	I1216 04:53:59.523406   10816 start.go:309] selected driver: docker
	I1216 04:53:59.523406   10816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.523406   10816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:53:59.529406   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.757949   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.738153267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.838476   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:53:59.838476   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:53:59.838997   10816 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.842569   10816 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 04:53:59.844586   10816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:53:59.847541   10816 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:53:59.850024   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:53:59.850024   10816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:53:59.850184   10816 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 04:53:59.850253   10816 cache.go:65] Caching tarball of preloaded images
	I1216 04:53:59.850408   10816 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 04:53:59.850408   10816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 04:53:59.850408   10816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 04:53:59.925943   10816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:53:59.925943   10816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:53:59.926465   10816 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:53:59.926540   10816 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:53:59.926717   10816 start.go:364] duration metric: took 124.8µs to acquireMachinesLock for "functional-002200"
	I1216 04:53:59.926803   10816 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:53:59.926803   10816 fix.go:54] fixHost starting: 
	I1216 04:53:59.933877   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:53:59.985861   10816 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 04:53:59.986777   10816 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:53:59.990712   10816 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 04:53:59.990712   10816 machine.go:94] provisionDockerMachine start ...
	I1216 04:53:59.994611   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.050133   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.050702   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.050702   10816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:54:00.224414   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.224414   10816 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 04:54:00.228183   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.284942   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.285440   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.285501   10816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 04:54:00.466400   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.469396   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.520394   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.520394   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.521395   10816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:54:00.690074   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:54:00.690074   10816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 04:54:00.690074   10816 ubuntu.go:190] setting up certificates
	I1216 04:54:00.690074   10816 provision.go:84] configureAuth start
	I1216 04:54:00.694148   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:00.751989   10816 provision.go:143] copyHostCerts
	I1216 04:54:00.752186   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1216 04:54:00.752528   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 04:54:00.752557   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 04:54:00.752557   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 04:54:00.753298   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1216 04:54:00.753298   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 04:54:00.753298   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 04:54:00.754021   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 04:54:00.754554   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1216 04:54:00.754554   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 04:54:00.754554   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 04:54:00.755135   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 04:54:00.755694   10816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 04:54:00.834817   10816 provision.go:177] copyRemoteCerts
	I1216 04:54:00.838808   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:54:00.841808   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.896045   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:01.027660   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1216 04:54:01.027660   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:54:01.054957   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1216 04:54:01.054957   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:54:01.077598   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1216 04:54:01.077598   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:54:01.104237   10816 provision.go:87] duration metric: took 414.1604ms to configureAuth
	I1216 04:54:01.104237   10816 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:54:01.105157   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:01.110636   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.168864   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.169525   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.169551   10816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 04:54:01.355861   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 04:54:01.355861   10816 ubuntu.go:71] root file system type: overlay
	I1216 04:54:01.355861   10816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 04:54:01.359632   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.417983   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.418643   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.418643   10816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 04:54:01.607477   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 04:54:01.611072   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.665669   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.666241   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.666241   10816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 04:54:01.838018   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
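[Editor's note] The one-liner above ("diff -u ... || { mv ...; daemon-reload; enable; restart; }") is an idempotent unit update: the freshly rendered docker.service.new is only swapped in, and docker only restarted, when it actually differs from the installed unit. A sketch of the same compare-before-restart idea in Go, assuming direct local file access rather than SSH:

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// syncUnit writes the unit file and restarts the service only on change.
	func syncUnit(path string, desired []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, desired) {
			return nil // already up to date: skip daemon-reload and restart entirely
		}
		if err := os.WriteFile(path, desired, 0o644); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}

	func main() {
		if err := syncUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n...")); err != nil {
			log.Fatal(err)
		}
	}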
	I1216 04:54:01.838065   10816 machine.go:97] duration metric: took 1.8473421s to provisionDockerMachine
	I1216 04:54:01.838112   10816 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 04:54:01.838112   10816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:54:01.842730   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:54:01.845927   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.899710   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.030948   10816 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:54:02.037585   10816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_ID="12"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:54:02.037585   10816 command_runner.go:130] > ID=debian
	I1216 04:54:02.037585   10816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:54:02.037585   10816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:54:02.037585   10816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:54:02.037585   10816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:54:02.037585   10816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 04:54:02.038695   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 04:54:02.038739   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /etc/ssl/certs/117042.pem
	I1216 04:54:02.039358   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 04:54:02.039390   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> /etc/test/nested/copy/11704/hosts
	I1216 04:54:02.043645   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 04:54:02.054687   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 04:54:02.077250   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 04:54:02.106199   10816 start.go:296] duration metric: took 268.0858ms for postStartSetup
	I1216 04:54:02.110518   10816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:54:02.114167   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.171516   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.294935   10816 command_runner.go:130] > 1%
	I1216 04:54:02.299449   10816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:54:02.309560   10816 command_runner.go:130] > 950G
	I1216 04:54:02.309560   10816 fix.go:56] duration metric: took 2.3827424s for fixHost
	I1216 04:54:02.309560   10816 start.go:83] releasing machines lock for "functional-002200", held for 2.3828036s
	I1216 04:54:02.313570   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:02.366171   10816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 04:54:02.371688   10816 ssh_runner.go:195] Run: cat /version.json
	I1216 04:54:02.371747   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.373884   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.425495   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.428440   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.530908   10816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1216 04:54:02.530908   10816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 04:54:02.552908   10816 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 04:54:02.557959   10816 ssh_runner.go:195] Run: systemctl --version
	I1216 04:54:02.566291   10816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:54:02.566291   10816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:54:02.571531   10816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:54:02.582535   10816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:54:02.582535   10816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:54:02.587977   10816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:54:02.599631   10816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:54:02.599684   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:02.599733   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:02.599952   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:02.620915   10816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1216 04:54:02.625275   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:54:02.642513   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:54:02.658404   10816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:54:02.664249   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:54:02.683612   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.703566   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:54:02.723114   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.741121   10816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:54:02.760533   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	W1216 04:54:02.771378   10816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 04:54:02.771378   10816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 04:54:02.781609   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:54:02.800465   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:54:02.819380   10816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:54:02.832241   10816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:54:02.836457   10816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:54:02.854943   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:02.994394   10816 ssh_runner.go:195] Run: sudo systemctl restart containerd
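[Editor's note] The run of "sed -i -r" commands above rewrites /etc/containerd/config.toml in place (sandbox_image, restrict_oom_score_adj, SystemdCgroup, runtime type, conf_dir) before containerd is restarted. A rough Go equivalent of one of those substitutions, using a (?m) multi-line regexp on the file contents; this is illustrative, not minikube's own code:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`

		// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAllString(conf, "${1}SystemdCgroup = false")
		fmt.Println(out)
	}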
	I1216 04:54:03.139472   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:03.139472   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:03.143391   10816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > [Unit]
	I1216 04:54:03.162559   10816 command_runner.go:130] > Description=Docker Application Container Engine
	I1216 04:54:03.162647   10816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1216 04:54:03.162647   10816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1216 04:54:03.162647   10816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1216 04:54:03.162647   10816 command_runner.go:130] > Requires=docker.socket
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitBurst=3
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitIntervalSec=60
	I1216 04:54:03.162734   10816 command_runner.go:130] > [Service]
	I1216 04:54:03.162734   10816 command_runner.go:130] > Type=notify
	I1216 04:54:03.162734   10816 command_runner.go:130] > Restart=always
	I1216 04:54:03.162734   10816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1216 04:54:03.162807   10816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1216 04:54:03.162828   10816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1216 04:54:03.162828   10816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1216 04:54:03.162828   10816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1216 04:54:03.162900   10816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1216 04:54:03.162917   10816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1216 04:54:03.162917   10816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1216 04:54:03.162917   10816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1216 04:54:03.162917   10816 command_runner.go:130] > ExecStart=
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1216 04:54:03.163008   10816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNOFILE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNPROC=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitCORE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1216 04:54:03.163065   10816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1216 04:54:03.163065   10816 command_runner.go:130] > TasksMax=infinity
	I1216 04:54:03.163065   10816 command_runner.go:130] > TimeoutStartSec=0
	I1216 04:54:03.163065   10816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1216 04:54:03.163112   10816 command_runner.go:130] > Delegate=yes
	I1216 04:54:03.163112   10816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1216 04:54:03.163112   10816 command_runner.go:130] > KillMode=process
	I1216 04:54:03.163112   10816 command_runner.go:130] > OOMScoreAdjust=-500
	I1216 04:54:03.163112   10816 command_runner.go:130] > [Install]
	I1216 04:54:03.163112   10816 command_runner.go:130] > WantedBy=multi-user.target
	I1216 04:54:03.167400   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.188934   10816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:54:03.279029   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.300208   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:54:03.316692   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:03.338834   10816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1216 04:54:03.343609   10816 ssh_runner.go:195] Run: which cri-dockerd
	I1216 04:54:03.350066   10816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1216 04:54:03.355212   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 04:54:03.369229   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 04:54:03.392646   10816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 04:54:03.524584   10816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 04:54:03.661458   10816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 04:54:03.661598   10816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 04:54:03.685520   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 04:54:03.708589   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:03.845683   10816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 04:54:04.645791   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:54:04.667182   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 04:54:04.690401   10816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 04:54:04.718176   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:04.738992   10816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 04:54:04.903819   10816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 04:54:05.034592   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.166883   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 04:54:05.190738   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 04:54:05.211273   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.344748   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 04:54:05.446097   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:05.463790   10816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 04:54:05.471347   10816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:54:05.478565   10816 command_runner.go:130] > Device: 0,112	Inode: 1751        Links: 1
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Modify: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Change: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] >  Birth: -
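[Editor's note] "Will wait 60s for socket path /var/run/cri-dockerd.sock" is a poll loop: the runner keeps checking the socket until it appears (or the deadline passes) before trusting cri-dockerd. A small sketch of that pattern; dialing instead of stat-ing is a stricter variant, since it also proves something is listening:

	package main

	import (
		"fmt"
		"log"
		"net"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("unix", path, time.Second); err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond) // back off between probes
		}
		return fmt.Errorf("socket %s not ready after %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cri-dockerd socket is up")
	}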
	I1216 04:54:05.478565   10816 start.go:564] Will wait 60s for crictl version
	I1216 04:54:05.482816   10816 ssh_runner.go:195] Run: which crictl
	I1216 04:54:05.491459   10816 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:54:05.496033   10816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:54:05.533167   10816 command_runner.go:130] > Version:  0.1.0
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeName:  docker
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:54:05.533167   10816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 04:54:05.536709   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.572362   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.576856   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.612780   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.616153   10816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 04:54:05.619706   10816 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 04:54:05.740410   10816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 04:54:05.744411   10816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 04:54:05.751410   10816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
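[Editor's note] Digging host.docker.internal inside the container is how minikube learns the host machine's IP (192.168.65.254 above) so it can maintain the host.minikube.internal entry in /etc/hosts. A sketch of the equivalent lookup with Go's standard resolver; the name only resolves where Docker provides it (e.g. Docker Desktop):

	package main

	import (
		"fmt"
		"log"
		"net"
	)

	func main() {
		addrs, err := net.LookupHost("host.docker.internal")
		if err != nil {
			log.Fatal(err)
		}
		// On Docker Desktop this typically yields the 192.168.65.x gateway seen above.
		fmt.Println("host ip candidates:", addrs)
	}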
	I1216 04:54:05.754417   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:05.810199   10816 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:54:05.810199   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:54:05.814984   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.850393   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.850393   10816 docker.go:621] Images already preloaded, skipping extraction
	I1216 04:54:05.852935   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.887286   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.887286   10816 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:54:05.887286   10816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 04:54:05.887286   10816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:54:05.890789   10816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 04:54:05.960191   10816 command_runner.go:130] > cgroupfs
	I1216 04:54:05.960191   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:54:05.960191   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:54:05.960191   10816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:54:05.960723   10816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:54:05.960947   10816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
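[Editor's note] The generated kubeadm config above is a single YAML stream stacking four documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of reading such a stream document by document with gopkg.in/yaml.v3; generic maps are used here for brevity, whereas kubeadm itself decodes into typed structs:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		stream := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	`
		dec := yaml.NewDecoder(strings.NewReader(stream))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more documents in the stream
				}
				log.Fatal(err)
			}
			fmt.Println("kind:", doc["kind"])
		}
	}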
	I1216 04:54:05.964962   10816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubeadm
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubectl
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubelet
	I1216 04:54:05.978770   10816 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:54:05.983615   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:54:05.994290   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 04:54:06.017936   10816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:54:06.036718   10816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1216 04:54:06.060901   10816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:54:06.072426   10816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:54:06.077308   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:06.213746   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:06.308797   10816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 04:54:06.308797   10816 certs.go:195] generating shared ca certs ...
	I1216 04:54:06.308797   10816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 04:54:06.310511   10816 certs.go:257] generating profile certs ...
	I1216 04:54:06.311535   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 04:54:06.311853   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 04:54:06.312156   10816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 04:54:06.312187   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:54:06.312277   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:54:06.312360   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:54:06.312444   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:54:06.312580   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:54:06.312673   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:54:06.312777   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:54:06.312890   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 04:54:06.313261   10816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 04:54:06.313921   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 04:54:06.314135   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 04:54:06.314531   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 04:54:06.314719   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem -> /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.315394   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:54:06.342547   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:54:06.368689   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:54:06.393638   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:54:06.418640   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:54:06.453759   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:54:06.476256   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:54:06.500532   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:54:06.524928   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 04:54:06.552508   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:54:06.575232   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 04:54:06.598894   10816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:54:06.620996   10816 ssh_runner.go:195] Run: openssl version
	I1216 04:54:06.631676   10816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:54:06.636278   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.653246   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:54:06.670292   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677576   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677653   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.681684   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.724946   10816 command_runner.go:130] > b5213941
	I1216 04:54:06.729462   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:54:06.747149   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.764470   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 04:54:06.780610   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.791611   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.834505   10816 command_runner.go:130] > 51391683
	I1216 04:54:06.839668   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:54:06.856437   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.871735   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 04:54:06.888873   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895775   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895828   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.900176   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.943961   10816 command_runner.go:130] > 3ec20f2e
	I1216 04:54:06.948620   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
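[Editor's note] The repeating pattern above (ln -fs into /usr/share/ca-certificates, "openssl x509 -hash", then "test -L /etc/ssl/certs/<hash>.0") maintains OpenSSL's hashed CA directory: each CA certificate must be reachable through a symlink named after its subject-name hash with a ".0" suffix, e.g. b5213941.0 for minikubeCA.pem. A sketch of producing that link by shelling out to openssl (assumes the openssl binary is on PATH):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, mirroring ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked")
	}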
	I1216 04:54:06.964812   10816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:54:06.978768   10816 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: 2025-12-16 04:49:55.262290705 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Modify: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Change: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978868   10816 command_runner.go:130] >  Birth: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.982552   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:54:07.026352   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.030610   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:54:07.075026   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.079065   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:54:07.126638   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.131687   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:54:07.174667   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.179083   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:54:07.222822   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.227385   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:54:07.271975   10816 command_runner.go:130] > Certificate will not expire
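[Editor's note] "openssl x509 -checkend 86400" exits 0 only if the certificate is still valid 24 hours from now, which is how each check above reaches "Certificate will not expire". The same test in pure Go without shelling out; the certificate path is one of those from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: openssl x509 -checkend 86400
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}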
	I1216 04:54:07.271975   10816 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:54:07.276330   10816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 04:54:07.308756   10816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:54:07.320226   10816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:54:07.320341   10816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:54:07.320341   10816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:54:07.325132   10816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:54:07.336047   10816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
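
The status-1 exit from "sudo test -d /data/minikube" is expected here: minikube treats the missing directory as "no compat symlinks needed" rather than as an error. A local sketch of the same probe-and-branch pattern, assuming plain exec instead of the log's SSH runner (the log also runs it under sudo):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // `test -d` exits 0 if the directory exists and 1 if it does not.
        err := exec.Command("test", "-d", "/data/minikube").Run()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("directory exists: compat symlinks apply")
        case errors.As(err, &ee) && ee.ExitCode() == 1:
            fmt.Println("directory absent: skipping compat symlinks")
        default:
            fmt.Println("probe failed:", err)
        }
    }
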
	I1216 04:54:07.339740   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.398431   10816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-002200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.399021   10816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-002200" cluster setting kubeconfig missing "functional-002200" context setting]
	I1216 04:54:07.399534   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.418099   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.418579   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
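
The rest.Config dump above is the client configuration derived from the repaired kubeconfig (note the profile-scoped client.crt/client.key and the shared ca.crt). A minimal client-go sketch that builds an equivalent client from a kubeconfig file; the path here is hypothetical:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // The log loads the CI kubeconfig; substitute any local path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("host:", cfg.Host, "client ready:", client != nil)
    }
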
	I1216 04:54:07.419732   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
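
The envvar.go lines report client-go's client-side feature gates at their defaults. These gates are read from the process environment; a sketch, assuming client-go's KUBE_FEATURE_<Name> convention, which must be set before the first client is constructed:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Assumption: client-go reads KUBE_FEATURE_<Name> once, at first
        // client construction, so flip gates before building any clientset.
        os.Setenv("KUBE_FEATURE_WatchListClient", "true")
        fmt.Println("WatchListClient requested:", os.Getenv("KUBE_FEATURE_WatchListClient"))
    }
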
	I1216 04:54:07.424264   10816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:54:07.438954   10816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 04:54:07.439621   10816 kubeadm.go:602] duration metric: took 119.279ms to restartPrimaryControlPlane
	I1216 04:54:07.439621   10816 kubeadm.go:403] duration metric: took 167.6444ms to StartCluster
	I1216 04:54:07.439621   10816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.439755   10816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.440821   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.441789   10816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 04:54:07.441839   10816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 04:54:07.442048   10816 addons.go:70] Setting storage-provisioner=true in profile "functional-002200"
	I1216 04:54:07.442048   10816 addons.go:70] Setting default-storageclass=true in profile "functional-002200"
	I1216 04:54:07.442130   10816 addons.go:239] Setting addon storage-provisioner=true in "functional-002200"
	I1216 04:54:07.442130   10816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-002200"
	I1216 04:54:07.442187   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.442187   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:07.445437   10816 out.go:179] * Verifying Kubernetes components...
	I1216 04:54:07.450118   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.450857   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.452175   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:07.507771   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.508167   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.508951   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.508951   10816 addons.go:239] Setting addon default-storageclass=true in "functional-002200"
	I1216 04:54:07.508951   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.517556   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.537496   10816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:07.540287   10816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.540287   10816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:54:07.546774   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.582442   10816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.582442   10816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:54:07.586285   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.606994   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:07.636962   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:07.645869   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:07.765470   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.777346   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.811577   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.866167   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 node_ready.go:35] waiting up to 6m0s for node "functional-002200" to be "Ready" ...
	W1216 04:54:07.869156   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 retry.go:31] will retry after 143.37804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 type.go:168] "Request Body" body=""
	I1216 04:54:07.870154   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	W1216 04:54:07.870154   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 retry.go:31] will retry after 150.951622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
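
Each failed apply above has the same shape: the apiserver on localhost:8441 is still down during the control-plane restart, so kubectl's client-side validation cannot fetch the OpenAPI schema and the connection is refused; minikube then schedules another attempt with a growing, jittered delay. A sketch of that retry loop, with a hypothetical manifest path and invented delay constants:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` with jittered, doubling delays,
    // the pattern behind the "will retry after ..." lines in the log.
    func applyWithRetry(manifest string, attempts int) error {
        delay := 150 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("kubectl", "apply", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        _ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
    }
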
	I1216 04:54:07.872075   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:54:08.018062   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.025836   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.095508   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.099951   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 retry.go:31] will retry after 537.200798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.103237   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.103772   10816 retry.go:31] will retry after 434.961679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.544092   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.626905   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.632935   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.632935   10816 retry.go:31] will retry after 617.835459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.641591   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.717034   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.721285   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.721336   10816 retry.go:31] will retry after 555.435942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.872382   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:08.872382   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:08.874726   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
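
The with_retry.go lines show the Kubernetes client honoring Retry-After responses while polling the node object, re-issuing the GET about once per second up to a bounded attempt count. A standalone sketch of that behavior with a plain HTTP client; the URL mirrors the log but the endpoint is illustrative:

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // getWithRetryAfter re-issues a GET while the server answers with a
    // Retry-After header, up to maxAttempts tries.
    func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
        for attempt := 1; ; attempt++ {
            resp, err := http.Get(url)
            if err != nil {
                return nil, err
            }
            ra := resp.Header.Get("Retry-After")
            if ra == "" || attempt >= maxAttempts {
                return resp, nil
            }
            resp.Body.Close()
            secs, convErr := strconv.Atoi(ra)
            if convErr != nil {
                secs = 1
            }
            fmt.Printf("Got a Retry-After response: delay=%ds attempt=%d\n", secs, attempt)
            time.Sleep(time.Duration(secs) * time.Second)
        }
    }

    func main() {
        _, _ = getWithRetryAfter("https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
    }
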
	I1216 04:54:09.256223   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:09.281163   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:09.337874   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.342648   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.342648   10816 retry.go:31] will retry after 1.171657048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.351506   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.353684   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.353684   10816 retry.go:31] will retry after 716.560141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.875116   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:09.875116   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:09.878246   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:10.075942   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:10.149131   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.153724   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.153724   10816 retry.go:31] will retry after 1.192910832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.520957   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:10.596120   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.600356   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.600356   10816 retry.go:31] will retry after 814.376196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.878697   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:10.879061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:10.882391   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:11.351917   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:11.419047   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:11.435699   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.435794   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.435828   10816 retry.go:31] will retry after 2.202073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.493635   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.497994   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.498062   10816 retry.go:31] will retry after 2.124694715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.883396   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:11.883898   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:11.886348   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:12.886583   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:12.886583   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:12.889839   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:13.629430   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:13.643127   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 3.773255134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 2.024299182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.890150   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:13.890150   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:13.893004   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:14.893300   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:14.893707   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:14.896357   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:15.748924   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:15.832154   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:15.836153   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.836153   10816 retry.go:31] will retry after 4.710098408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.897470   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:15.897470   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:15.900560   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:16.900812   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:16.900812   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:16.904208   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:17.498553   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:17.582081   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:17.582134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.582134   10816 retry.go:31] will retry after 4.959220117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.904607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:17.904607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.907482   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:17.907482   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
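
After ten Retry-After rounds the poller surfaces the EOF above and starts a fresh request cycle; the outer node_ready wait keeps this up for as long as 6m0s. A client-go sketch of such a Ready-condition wait that tolerates transient errors while the apiserver restarts (the kubeconfig path is hypothetical):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 2s, give up after 6m, matching the log's wait budget.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, "functional-002200", metav1.GetOptions{})
                if err != nil {
                    // Tolerate EOF/refused while the apiserver restarts; keep polling.
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node Ready wait finished:", err)
    }
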
	I1216 04:54:17.907482   10816 type.go:168] "Request Body" body=""
	I1216 04:54:17.907482   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.910186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:18.910930   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:18.910930   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:18.913636   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:19.913975   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:19.913975   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:19.917442   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:20.551463   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:20.635939   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:20.635939   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.635939   10816 retry.go:31] will retry after 7.302087091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.917543   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:20.917543   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:20.922152   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:21.922714   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:21.923090   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:21.925451   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:22.546716   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:22.623025   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:22.626750   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.626750   10816 retry.go:31] will retry after 6.831180284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.925790   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:22.925790   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:22.929352   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:23.930014   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:23.930092   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:23.932838   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:24.933846   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:24.934195   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:24.936622   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:25.937442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:25.937516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:25.940094   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:26.940283   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:26.940283   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:26.943747   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:27.943504   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:27.945094   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:27.945165   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.947573   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:27.947626   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:27.947734   10816 type.go:168] "Request Body" body=""
	I1216 04:54:27.947766   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.950140   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:28.023100   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:28.027085   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.027085   10816 retry.go:31] will retry after 8.693676062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.950523   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:28.950523   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:28.955399   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:29.463172   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:29.548936   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:29.548936   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.551954   10816 retry.go:31] will retry after 8.541447036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.956404   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:29.956404   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:29.959065   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:30.959708   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:30.959708   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:30.963012   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:31.964093   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:31.964093   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:31.967555   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:32.968057   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:32.968057   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:32.970609   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:33.971778   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:33.971778   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:33.975447   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:34.975764   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:34.975764   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:34.980867   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:54:35.981702   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:35.981702   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:35.985092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
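
The with_retry.go lines trace client-go's Retry-After handling: each GET comes back with a Retry-After header, the client sleeps for the advertised delay (1s here) and retries the same URL, and after ten attempts the underlying EOF is surfaced to the caller. A self-contained sketch of that loop over plain net/http, assuming only what the log shows (the URL, the 1s fallback delay, the cap of ten attempts); the insecure TLS config stands in for minikube's real client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter retries a GET while the server answers with a
// Retry-After header, mirroring the attempt=1..10 pattern in the log.
func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err // e.g. the EOF the node_ready warnings report
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		delay := time.Second // fallback, matches delay="1s" above
		if secs, convErr := strconv.Atoi(ra); convErr == nil {
			delay = time.Duration(secs) * time.Second
		}
		time.Sleep(delay)
	}
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Stand-in for the test cluster's self-signed certificates.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := getWithRetryAfter(client,
		"https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
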
	I1216 04:54:36.726019   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:36.801339   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:36.806868   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.806868   10816 retry.go:31] will retry after 11.085665292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.986076   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:36.986076   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:36.989365   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:37.990461   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:37.990461   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.994420   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:54:37.994494   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:37.994613   10816 type.go:168] "Request Body" body=""
	I1216 04:54:37.994697   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.996806   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
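
The node_ready.go warnings mean the Ready condition of node functional-002200 could never be read, because every GET ended in EOF. For reference, a readiness check against the typed API object usually looks like the sketch below; it assumes the k8s.io/api/core/v1 types and elides the client-go fetch that the log's requests correspond to.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isNodeReady reports whether the node's Ready condition is True,
// the condition named in the node_ready.go warnings above.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Stand-in object; in the real flow this is deserialized from the
	// GET /api/v1/nodes/functional-002200 response.
	node := &corev1.Node{
		Status: corev1.NodeStatus{
			Conditions: []corev1.NodeCondition{
				{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isNodeReady(node))
}
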
	I1216 04:54:38.098931   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:38.175856   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:38.181908   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.181908   10816 retry.go:31] will retry after 20.635277746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
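
The retry.go delays printed through this stretch (8.69s, 8.54s, 11.09s, 20.64s, later 40.98s) follow a jittered, roughly exponential backoff. A minimal sketch of that shape, inferred from the printed delays rather than from minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with exponentially growing, jittered
// delays, roughly matching the 8s -> 40s progression in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 8*time.Second, func() error {
		// Stand-in for the failing kubectl apply.
		return errors.New("connection refused")
	})
	fmt.Println("giving up:", err)
}
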
	I1216 04:54:38.997597   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:38.997597   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:39.000931   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:40.001375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:40.001375   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:40.004974   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:41.005192   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:41.005192   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:41.007919   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:42.009105   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:42.009105   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:42.012612   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:43.013312   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:43.013312   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:43.016575   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:44.017297   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:44.017297   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:44.020296   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:45.020698   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:45.020698   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:45.023875   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:46.024607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:46.024607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:46.027947   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.028088   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:47.028746   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:47.032023   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.898206   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:47.976246   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:47.980090   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:47.980090   10816 retry.go:31] will retry after 12.179357603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:48.033037   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:48.033037   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.035808   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:48.035808   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:48.035808   10816 type.go:168] "Request Body" body=""
	I1216 04:54:48.035808   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.040977   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:54:49.041226   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:49.041572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:49.043632   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:50.044672   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:50.044672   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:50.048807   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:51.049032   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:51.049032   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:51.051895   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:52.052810   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:52.052810   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:52.056184   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:53.056422   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:53.056422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:53.059030   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:54.059750   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:54.060113   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:54.063020   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:55.063099   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:55.063099   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:55.066474   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:56.066822   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:56.066822   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:56.071205   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:57.071421   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:57.071421   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:57.073734   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:58.073939   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:58.073939   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.076906   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:58.076906   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:58.076906   10816 type.go:168] "Request Body" body=""
	I1216 04:54:58.076906   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.081072   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:58.823241   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:58.903750   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:58.908134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:58.908134   10816 retry.go:31] will retry after 21.057070222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:59.081704   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:59.082161   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:59.085119   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.085233   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:00.085233   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:00.088190   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.165511   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:00.236692   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:00.240478   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:00.240478   10816 retry.go:31] will retry after 25.698880398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:01.089206   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:01.089206   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:01.093274   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:55:02.094123   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:02.094422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:02.097156   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:03.098295   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:03.098295   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:03.102257   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:04.103035   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:04.103035   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:04.106884   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:05.107465   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:05.107465   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:05.110542   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:06.112033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:06.112033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:06.114883   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:07.115061   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:07.115061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:07.118200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:08.119287   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:08.119622   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.122289   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:08.122330   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:08.122429   10816 type.go:168] "Request Body" body=""
	I1216 04:55:08.122520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.125754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:09.126342   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:09.126818   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:09.129086   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:10.129383   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:10.129722   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:10.133200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:11.134173   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:11.134173   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:11.136746   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:12.137338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:12.137338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:12.140387   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:13.140819   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:13.140819   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:13.144315   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:14.144624   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:14.144624   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:14.146619   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:55:15.148016   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:15.148016   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:15.150667   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:16.151188   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:16.151188   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:16.154512   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:17.154762   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:17.154762   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:17.157863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:18.158498   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:18.158835   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.161129   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:18.161129   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:18.161666   10816 type.go:168] "Request Body" body=""
	I1216 04:55:18.161765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.165763   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.166375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:19.166948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:19.170530   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.970281   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:55:20.048987   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:20.052948   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.052948   10816 retry.go:31] will retry after 40.980819462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.171417   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:20.171417   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:20.174285   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:21.174459   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:21.174459   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:21.178349   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:22.178639   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:22.178639   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:22.182103   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:23.182373   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:23.182373   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:23.186196   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:24.187572   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:24.187572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:24.190721   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:25.191259   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:25.191259   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:25.193863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:25.945563   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:26.023336   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
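
All of the failures in this stretch bottom out in one root cause: nothing is listening on apiserver port 8441, which the node-local kubeconfig targets, so the OpenAPI download and the apply are both refused. A quick probe of that port (the address is taken from the error text; this sketch is illustrative and not part of the test) separates that condition from a genuine validation error:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial target comes from the "dial tcp [::1]:8441: connect:
	// connection refused" errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
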
	I1216 04:55:26.194033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:26.194033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:26.196611   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:27.198100   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:27.198100   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:27.201373   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:28.202260   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:28.202336   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:28.205520   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:55:28.205520   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:28.205520   10816 type.go:168] "Request Body" body=""
	I1216 04:55:28.205520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:28.207479   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:55:29.208141   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:29.208141   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:29.210912   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:30.211277   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:30.211277   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:30.215183   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:31.215597   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:31.216087   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:31.220042   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:32.220845   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:32.220845   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:32.224468   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:33.225011   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:33.225011   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:33.227593   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:34.228072   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:34.228072   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:34.232200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:55:35.233142   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:35.233142   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:35.236555   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:36.236770   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:36.236770   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:36.239805   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:37.240445   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:37.240445   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:37.244092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:38.245044   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:38.245410   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:38.248594   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:55:38.248691   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:38.248769   10816 type.go:168] "Request Body" body=""
	[... one ten-attempt retry cycle (one request per second) elided ...]
	W1216 04:55:48.289962   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[... another ten-attempt retry cycle elided ...]
	W1216 04:55:58.330655   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[... the next cycle is two attempts in (04:55:59-04:56:00) when addon enablement completes below ...]
	I1216 04:56:01.039745   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:56:01.115386   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115386   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115924   10816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:56:01.120162   10816 out.go:179] * Enabled addons: 
	I1216 04:56:01.123251   10816 addons.go:530] duration metric: took 1m53.6807689s for enable addons: enabled=[]
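The storageclass addon fails here only because kubectl cannot reach the apiserver: validation tries to download the OpenAPI schema from https://localhost:8441 and the connection is refused, so the --validate=false that the error text suggests would merely skip that download; the apply itself would still need a live apiserver. The "apply failed, will retry" line shows addons.go wraps the apply in retry callbacks. A rough Go sketch of that retry shape, run directly on the node (minikube actually drives it over SSH via ssh_runner); the command and paths are copied from the log, while the 5-attempt/10-second policy is purely an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyAddon re-runs the exact kubectl apply that failed above until the
// apiserver accepts it. Command and paths are copied from the log; the
// retry policy (5 attempts, 10s apart) is an illustrative assumption.
func applyAddon() error {
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml",
		).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
		time.Sleep(10 * time.Second)
	}
	return lastErr
}

func main() {
	if err := applyAddon(); err != nil {
		fmt.Println("giving up:", err)
	}
}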
	[... the one-second retry loop on GET /api/v1/nodes/functional-002200 runs on unchanged for another minute; node_ready.go:55 ends each ten-attempt cycle with the same "will retry" EOF warning at 04:56:08, 04:56:18, 04:56:28, 04:56:38, 04:56:48 and 04:56:58; per-request entries elided ...]
	W1216 04:57:08.626961   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:57:08.626961   10816 type.go:168] "Request Body" body=""
	I1216 04:57:08.626961   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:08.629859   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	[... retry loop repeats unchanged for the next 90 seconds: once per second with_retry.go logs "Got a Retry-After response" and re-issues the same GET to https://127.0.0.1:49316/api/v1/nodes/functional-002200 (attempts 1-10, each answered in 1-5 ms); after every tenth attempt the identical warning `error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF` recurs, at 04:57:18, 04:57:28, 04:57:38, 04:57:48, 04:57:58, 04:58:08, 04:58:18, and 04:58:28 ...]
	W1216 04:58:39.001897   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[... the cycle above repeats essentially verbatim every ~10s: a fresh GET of https://127.0.0.1:49316/api/v1/nodes/functional-002200, ten Retry-After retries at 1s intervals (responses in 1-5 ms, empty status/headers), then the same node_ready.go:55 "will retry" EOF warning -- logged at 04:58:49, 04:58:59, 04:59:09, 04:59:19, 04:59:29, 04:59:39, 04:59:49 and 04:59:59; the final cycle is cut short at 05:00:07 by the 6m0s wait deadline ...]
	W1216 05:00:07.871664   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 05:00:07.871664   10816 node_ready.go:38] duration metric: took 6m0.0002013s for node "functional-002200" to be "Ready" ...
	I1216 05:00:07.876577   10816 out.go:203] 
	W1216 05:00:07.879616   10816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 05:00:07.879616   10816 out.go:285] * 
	W1216 05:00:07.881276   10816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:00:07.884672   10816 out.go:203] 
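For reference, the wait that exhausts its 6m0s budget above is a poll of the node's Ready condition against the apiserver. The following is a minimal client-go sketch of such a loop, not minikube's actual node_ready.go; the helper name waitNodeReady, the 10s poll interval, and reading the kubeconfig from the default home location are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the named node until its Ready condition is True,
	// retrying on transient errors (such as the EOFs above) until ctx expires.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		tick := time.NewTicker(10 * time.Second) // assumed poll interval
		defer tick.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // node reported Ready
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
			case <-tick.C: // try again on the next tick
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-002200"); err != nil {
			fmt.Println(err) // same class of failure as the GUEST_START exit above
		}
	}

With the apiserver connection dying with EOF on every attempt, each Get in a loop like this keeps failing until the six-minute context deadline expires, which matches the GUEST_START exit logged above.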
	
	
	==> Docker <==
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532904868Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532910769Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532962273Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.533000176Z" level=info msg="Initializing buildkit"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.632934284Z" level=info msg="Completed buildkit initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638730325Z" level=info msg="Daemon has completed initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638930540Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638973643Z" level=info msg="API listen on [::]:2376"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638987344Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:04 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 04:54:05 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Loaded network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 04:54:05 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:02:17.265503   20219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:17.266436   20219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:17.268055   20219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:17.269999   20219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:17.271445   20219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001061] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001041] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000838] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001072] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 04:54] CPU: 8 PID: 53756 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001140] RIP: 0033:0x7f1fa5473b20
	[  +0.000543] Code: Unable to access opcode bytes at RIP 0x7f1fa5473af6.
	[  +0.001042] RSP: 002b:00007ffde8c4f290 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000944] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001046] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000944] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001149] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000795] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000802] FS:  0000000000000000 GS:  0000000000000000
	[  +0.814553] CPU: 10 PID: 53882 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000797] RIP: 0033:0x7f498f339b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7f498f339af6.
	[  +0.000625] RSP: 002b:00007ffc77d465d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000824] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:02:17 up 38 min,  0 user,  load average: 0.37, 0.41, 0.57
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:02:14 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:14 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 985.
	Dec 16 05:02:14 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:14 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:14 functional-002200 kubelet[20055]: E1216 05:02:14.855485   20055 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:14 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:14 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:15 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 986.
	Dec 16 05:02:15 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:15 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:15 functional-002200 kubelet[20067]: E1216 05:02:15.593084   20067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:15 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:15 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:16 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 987.
	Dec 16 05:02:16 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:16 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:16 functional-002200 kubelet[20094]: E1216 05:02:16.329737   20094 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:16 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:16 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:17 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 988.
	Dec 16 05:02:17 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:17 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:17 functional-002200 kubelet[20162]: E1216 05:02:17.112840   20162 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:17 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:17 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (563.1289ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (53.84s)
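Note: the kubelet crash loop above (restart counter at 988 and climbing) is the shared root cause behind this failure: kubelet v1.35.0-beta.0 validates the host cgroup hierarchy at startup and refuses to run on cgroup v1, which is what this WSL2 kernel presents (the dockerd log above likewise warns that cgroup v1 support is deprecated). A minimal Linux-only Go sketch of the same detection, offered as a hypothetical diagnostic helper rather than minikube's own code:

	package main

	import (
		"fmt"
		"syscall"
	)

	// CGROUP2_SUPER_MAGIC from linux/magic.h: the f_type that statfs reports
	// when /sys/fs/cgroup is a cgroup v2 (unified hierarchy) mount.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var fs syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &fs); err != nil {
			fmt.Println("statfs /sys/fs/cgroup:", err)
			return
		}
		if fs.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2: this kubelet version can start")
		} else {
			fmt.Println("cgroup v1: kubelet v1.35.0-beta.0 exits, as logged above")
		}
	}

The shell equivalent is stat -fc %T /sys/fs/cgroup/, which prints cgroup2fs on a v2 host and tmpfs on a v1 layout.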

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (3.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:750: failed to link kubectl binary from out/minikube-windows-amd64.exe to out\kubectl.exe: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
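Note: the error text matches Go's os.Link, which fails when newname already exists, so a stale out\kubectl.exe left behind by an earlier run breaks the re-link. A sketch of an idempotent variant; linkOrReplace is a hypothetical helper name, and that the harness links via os.Link is an assumption drawn only from the error string:

	package main

	import (
		"fmt"
		"os"
	)

	// linkOrReplace hard-links oldname to newname, removing any stale newname
	// first so os.Link cannot fail with "file already exists".
	func linkOrReplace(oldname, newname string) error {
		if err := os.Remove(newname); err != nil && !os.IsNotExist(err) {
			return fmt.Errorf("removing stale %s: %w", newname, err)
		}
		return os.Link(oldname, newname)
	}

	func main() {
		if err := linkOrReplace("out/minikube-windows-amd64.exe", `out\kubectl.exe`); err != nil {
			fmt.Println(err)
		}
	}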
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (574.2534ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.1646575s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-902700 ssh pgrep buildkitd                                                                                   │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image   │ functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr                  │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service │ functional-902700 service hello-node --url                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image   │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format yaml --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format table --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format json --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ delete  │ -p functional-902700                                                                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │ 16 Dec 25 04:45 UTC │
	│ start   │ -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │                     │
	│ start   │ -p functional-002200 --alsologtostderr -v=8                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:53 UTC │                     │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.1                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.3                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:latest                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add minikube-local-cache-test:functional-002200                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache delete minikube-local-cache-test:functional-002200                                              │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl images                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	│ cache   │ functional-002200 cache reload                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ kubectl │ functional-002200 kubectl -- --context functional-002200 get pods                                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:53:59
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:53:59.077529   10816 out.go:360] Setting OutFile to fd 1388 ...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.120079   10816 out.go:374] Setting ErrFile to fd 1504...
	I1216 04:53:59.120079   10816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:53:59.134125   10816 out.go:368] Setting JSON to false
	I1216 04:53:59.136333   10816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1860,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:53:59.136333   10816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:53:59.140588   10816 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:53:59.143257   10816 notify.go:221] Checking for updates...
	I1216 04:53:59.144338   10816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:53:59.146335   10816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:53:59.148852   10816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:53:59.153389   10816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:53:59.155692   10816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:53:59.158810   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:53:59.158810   10816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:53:59.271386   10816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:53:59.275857   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.515409   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.497557869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.520423   10816 out.go:179] * Using the docker driver based on existing profile
	I1216 04:53:59.523406   10816 start.go:309] selected driver: docker
	I1216 04:53:59.523406   10816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.523406   10816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:53:59.529406   10816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:53:59.757949   10816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:53:59.738153267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:53:59.838476   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:53:59.838476   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:53:59.838997   10816 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:53:59.842569   10816 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 04:53:59.844586   10816 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:53:59.847541   10816 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:53:59.850024   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:53:59.850024   10816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:53:59.850184   10816 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 04:53:59.850253   10816 cache.go:65] Caching tarball of preloaded images
	I1216 04:53:59.850408   10816 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 04:53:59.850408   10816 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 04:53:59.850408   10816 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 04:53:59.925943   10816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 04:53:59.925943   10816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 04:53:59.926465   10816 cache.go:243] Successfully downloaded all kic artifacts
	I1216 04:53:59.926540   10816 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:53:59.926717   10816 start.go:364] duration metric: took 124.8µs to acquireMachinesLock for "functional-002200"
	I1216 04:53:59.926803   10816 start.go:96] Skipping create...Using existing machine configuration
	I1216 04:53:59.926803   10816 fix.go:54] fixHost starting: 
	I1216 04:53:59.933877   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:53:59.985861   10816 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 04:53:59.986777   10816 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 04:53:59.990712   10816 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 04:53:59.990712   10816 machine.go:94] provisionDockerMachine start ...
	I1216 04:53:59.994611   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.050133   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.050702   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.050702   10816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:54:00.224414   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.224414   10816 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 04:54:00.228183   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.284942   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.285440   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.285501   10816 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 04:54:00.466400   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 04:54:00.469396   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.520394   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:00.520394   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:00.521395   10816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:54:00.690074   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:54:00.690074   10816 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 04:54:00.690074   10816 ubuntu.go:190] setting up certificates
	I1216 04:54:00.690074   10816 provision.go:84] configureAuth start
	I1216 04:54:00.694148   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:00.751989   10816 provision.go:143] copyHostCerts
	I1216 04:54:00.752186   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1216 04:54:00.752528   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 04:54:00.752557   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 04:54:00.752557   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 04:54:00.753298   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1216 04:54:00.753298   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 04:54:00.753298   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 04:54:00.754021   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 04:54:00.754554   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1216 04:54:00.754554   10816 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 04:54:00.754554   10816 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 04:54:00.755135   10816 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 04:54:00.755694   10816 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 04:54:00.834817   10816 provision.go:177] copyRemoteCerts
	I1216 04:54:00.838808   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:54:00.841808   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:00.896045   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:01.027660   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1216 04:54:01.027660   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 04:54:01.054957   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1216 04:54:01.054957   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 04:54:01.077598   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1216 04:54:01.077598   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 04:54:01.104237   10816 provision.go:87] duration metric: took 414.1604ms to configureAuth
	I1216 04:54:01.104237   10816 ubuntu.go:206] setting minikube options for container-runtime
	I1216 04:54:01.105157   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:01.110636   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.168864   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.169525   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.169551   10816 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 04:54:01.355861   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 04:54:01.355861   10816 ubuntu.go:71] root file system type: overlay
	I1216 04:54:01.355861   10816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 04:54:01.359632   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.417983   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.418643   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.418643   10816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 04:54:01.607477   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 04:54:01.611072   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.665669   10816 main.go:143] libmachine: Using SSH client type: native
	I1216 04:54:01.666241   10816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 04:54:01.666241   10816 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 04:54:01.838018   10816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
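The SSH command above is an idempotent update: `diff -u` exits non-zero only when the rendered unit differs from the installed one, so the `||` branch moves the new file into place and reloads/restarts Docker only in that case (here the diff succeeded, so nothing was replaced). The same pattern, unrolled into an if-statement for readability, with the paths as in the log:

	# Replace the unit file only when its content changed, then reload and restart.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	fi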
	I1216 04:54:01.838065   10816 machine.go:97] duration metric: took 1.8473421s to provisionDockerMachine
	I1216 04:54:01.838112   10816 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 04:54:01.838112   10816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:54:01.842730   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:54:01.845927   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:01.899710   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.030948   10816 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:54:02.037585   10816 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_ID="12"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION="12 (bookworm)"
	I1216 04:54:02.037585   10816 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1216 04:54:02.037585   10816 command_runner.go:130] > ID=debian
	I1216 04:54:02.037585   10816 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1216 04:54:02.037585   10816 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1216 04:54:02.037585   10816 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1216 04:54:02.037585   10816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 04:54:02.037585   10816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 04:54:02.037585   10816 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 04:54:02.038695   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 04:54:02.038739   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /etc/ssl/certs/117042.pem
	I1216 04:54:02.039358   10816 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 04:54:02.039390   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> /etc/test/nested/copy/11704/hosts
	I1216 04:54:02.043645   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 04:54:02.054687   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 04:54:02.077250   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 04:54:02.106199   10816 start.go:296] duration metric: took 268.0858ms for postStartSetup
	I1216 04:54:02.110518   10816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:54:02.114167   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.171516   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.294935   10816 command_runner.go:130] > 1%
	I1216 04:54:02.299449   10816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 04:54:02.309560   10816 command_runner.go:130] > 950G
	I1216 04:54:02.309560   10816 fix.go:56] duration metric: took 2.3827424s for fixHost
	I1216 04:54:02.309560   10816 start.go:83] releasing machines lock for "functional-002200", held for 2.3828036s
	I1216 04:54:02.313570   10816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 04:54:02.366171   10816 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 04:54:02.371688   10816 ssh_runner.go:195] Run: cat /version.json
	I1216 04:54:02.371747   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.373884   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:02.425495   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.428440   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:02.530908   10816 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1216 04:54:02.530908   10816 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 04:54:02.552908   10816 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1216 04:54:02.557959   10816 ssh_runner.go:195] Run: systemctl --version
	I1216 04:54:02.566291   10816 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1216 04:54:02.566291   10816 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1216 04:54:02.571531   10816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 04:54:02.582535   10816 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1216 04:54:02.582535   10816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:54:02.587977   10816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:54:02.599631   10816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 04:54:02.599684   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:02.599733   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:02.599952   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:02.620915   10816 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
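crictl reads its default runtime endpoint from /etc/crictl.yaml, which this step points at containerd (a later step rewrites it to the cri-dockerd socket). The endpoint can also be supplied per invocation, which is a quick way to check that a given CRI socket answers; a small sketch:

	# Query the CRI runtime directly, bypassing /etc/crictl.yaml.
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version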
	I1216 04:54:02.625275   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 04:54:02.642513   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 04:54:02.658404   10816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 04:54:02.664249   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 04:54:02.683612   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.703566   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 04:54:02.723114   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 04:54:02.741121   10816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:54:02.760533   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	W1216 04:54:02.771378   10816 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 04:54:02.771378   10816 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 04:54:02.781609   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 04:54:02.800465   10816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 04:54:02.819380   10816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:54:02.832241   10816 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1216 04:54:02.836457   10816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
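Both sysctls touched here are kube-proxy/CNI prerequisites: the node must route pod traffic (net.ipv4.ip_forward=1) and bridged traffic must pass through iptables (net.bridge.bridge-nf-call-iptables=1). The echo above only lasts until reboot; a persistent variant, as a sketch — the drop-in file name is an assumption, not something minikube writes:

	# Persist the settings across reboots via sysctl.d (hypothetical file name).
	cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	net.ipv4.ip_forward = 1
	net.bridge.bridge-nf-call-iptables = 1
	EOF
	sudo sysctl --system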
	I1216 04:54:02.854943   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:02.994394   10816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 04:54:03.139472   10816 start.go:496] detecting cgroup driver to use...
	I1216 04:54:03.139472   10816 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 04:54:03.143391   10816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1216 04:54:03.162559   10816 command_runner.go:130] > [Unit]
	I1216 04:54:03.162559   10816 command_runner.go:130] > Description=Docker Application Container Engine
	I1216 04:54:03.162647   10816 command_runner.go:130] > Documentation=https://docs.docker.com
	I1216 04:54:03.162647   10816 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1216 04:54:03.162647   10816 command_runner.go:130] > Wants=network-online.target containerd.service
	I1216 04:54:03.162647   10816 command_runner.go:130] > Requires=docker.socket
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitBurst=3
	I1216 04:54:03.162689   10816 command_runner.go:130] > StartLimitIntervalSec=60
	I1216 04:54:03.162734   10816 command_runner.go:130] > [Service]
	I1216 04:54:03.162734   10816 command_runner.go:130] > Type=notify
	I1216 04:54:03.162734   10816 command_runner.go:130] > Restart=always
	I1216 04:54:03.162734   10816 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1216 04:54:03.162807   10816 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1216 04:54:03.162828   10816 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1216 04:54:03.162828   10816 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1216 04:54:03.162828   10816 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1216 04:54:03.162900   10816 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1216 04:54:03.162917   10816 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1216 04:54:03.162917   10816 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1216 04:54:03.162917   10816 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1216 04:54:03.162917   10816 command_runner.go:130] > ExecStart=
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1216 04:54:03.162980   10816 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1216 04:54:03.163008   10816 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNOFILE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitNPROC=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > LimitCORE=infinity
	I1216 04:54:03.163008   10816 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1216 04:54:03.163065   10816 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1216 04:54:03.163065   10816 command_runner.go:130] > TasksMax=infinity
	I1216 04:54:03.163065   10816 command_runner.go:130] > TimeoutStartSec=0
	I1216 04:54:03.163065   10816 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1216 04:54:03.163112   10816 command_runner.go:130] > Delegate=yes
	I1216 04:54:03.163112   10816 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1216 04:54:03.163112   10816 command_runner.go:130] > KillMode=process
	I1216 04:54:03.163112   10816 command_runner.go:130] > OOMScoreAdjust=-500
	I1216 04:54:03.163112   10816 command_runner.go:130] > [Install]
	I1216 04:54:03.163112   10816 command_runner.go:130] > WantedBy=multi-user.target
	I1216 04:54:03.167400   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.188934   10816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:54:03.279029   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:54:03.300208   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 04:54:03.316692   10816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:54:03.338834   10816 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1216 04:54:03.343609   10816 ssh_runner.go:195] Run: which cri-dockerd
	I1216 04:54:03.350066   10816 command_runner.go:130] > /usr/bin/cri-dockerd
	I1216 04:54:03.355212   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 04:54:03.369229   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 04:54:03.392646   10816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 04:54:03.524584   10816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 04:54:03.661458   10816 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 04:54:03.661598   10816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 04:54:03.685520   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 04:54:03.708589   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:03.845683   10816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 04:54:04.645791   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:54:04.667182   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 04:54:04.690401   10816 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 04:54:04.718176   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:04.738992   10816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 04:54:04.903819   10816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 04:54:05.034592   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.166883   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 04:54:05.190738   10816 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 04:54:05.211273   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:05.344748   10816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 04:54:05.446097   10816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 04:54:05.463790   10816 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 04:54:05.471347   10816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1216 04:54:05.478565   10816 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1216 04:54:05.478565   10816 command_runner.go:130] > Device: 0,112	Inode: 1751        Links: 1
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1216 04:54:05.478565   10816 command_runner.go:130] > Access: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Modify: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] > Change: 2025-12-16 04:54:05.344842281 +0000
	I1216 04:54:05.478565   10816 command_runner.go:130] >  Birth: -
	I1216 04:54:05.478565   10816 start.go:564] Will wait 60s for crictl version
	I1216 04:54:05.482816   10816 ssh_runner.go:195] Run: which crictl
	I1216 04:54:05.491459   10816 command_runner.go:130] > /usr/local/bin/crictl
	I1216 04:54:05.496033   10816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 04:54:05.533167   10816 command_runner.go:130] > Version:  0.1.0
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeName:  docker
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1216 04:54:05.533167   10816 command_runner.go:130] > RuntimeApiVersion:  v1
	I1216 04:54:05.533167   10816 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 04:54:05.536709   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.572362   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.576856   10816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 04:54:05.612780   10816 command_runner.go:130] > 29.1.3
	I1216 04:54:05.616153   10816 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 04:54:05.619706   10816 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 04:54:05.740410   10816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 04:54:05.744411   10816 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 04:54:05.751410   10816 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1216 04:54:05.754417   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:05.810199   10816 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:54:05.810199   10816 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:54:05.814984   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.850393   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.850393   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.850393   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.850393   10816 docker.go:621] Images already preloaded, skipping extraction
	I1216 04:54:05.852935   10816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1216 04:54:05.887286   10816 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1216 04:54:05.887286   10816 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:05.887286   10816 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 04:54:05.887286   10816 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:54:05.887286   10816 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 04:54:05.887286   10816 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:54:05.890789   10816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 04:54:05.960191   10816 command_runner.go:130] > cgroupfs
	I1216 04:54:05.960191   10816 cni.go:84] Creating CNI manager for ""
	I1216 04:54:05.960191   10816 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:54:05.960191   10816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:54:05.960723   10816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:54:05.960947   10816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
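The rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new before kubeadm consumes it (see the scp a few lines below). Recent kubeadm releases can sanity-check such a file offline; a sketch, assuming the `config validate` subcommand (available in kubeadm since roughly v1.26) is present in the pinned binary:

	# Offline schema check of the generated kubeadm config (assumes kubeadm >= v1.26).
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new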
	
	I1216 04:54:05.964962   10816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubeadm
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubectl
	I1216 04:54:05.978770   10816 command_runner.go:130] > kubelet
	I1216 04:54:05.978770   10816 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:54:05.983615   10816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:54:05.994290   10816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 04:54:06.017936   10816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 04:54:06.036718   10816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1216 04:54:06.060901   10816 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 04:54:06.072426   10816 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1216 04:54:06.077308   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:06.213746   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:06.308797   10816 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 04:54:06.308797   10816 certs.go:195] generating shared ca certs ...
	I1216 04:54:06.308797   10816 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 04:54:06.310511   10816 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 04:54:06.310511   10816 certs.go:257] generating profile certs ...
	I1216 04:54:06.311535   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 04:54:06.311853   10816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 04:54:06.312156   10816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 04:54:06.312187   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1216 04:54:06.312277   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1216 04:54:06.312360   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1216 04:54:06.312444   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1216 04:54:06.312580   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1216 04:54:06.312673   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1216 04:54:06.312777   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1216 04:54:06.312890   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 04:54:06.313261   10816 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 04:54:06.313261   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 04:54:06.313921   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 04:54:06.314135   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 04:54:06.314531   10816 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 04:54:06.314719   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.314759   10816 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem -> /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.315394   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:54:06.342547   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 04:54:06.368689   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:54:06.393638   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:54:06.418640   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 04:54:06.453759   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 04:54:06.476256   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:54:06.500532   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:54:06.524928   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 04:54:06.552508   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:54:06.575232   10816 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 04:54:06.598894   10816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:54:06.620996   10816 ssh_runner.go:195] Run: openssl version
	I1216 04:54:06.631676   10816 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1216 04:54:06.636278   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.653246   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:54:06.670292   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677576   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.677653   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.681684   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:54:06.724946   10816 command_runner.go:130] > b5213941
	I1216 04:54:06.729462   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
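The hash-then-symlink sequence above is OpenSSL's hashed certificate directory convention: verifiers scan /etc/ssl/certs for files named <subject-hash>.0, so each CA PEM is linked under the hash that `openssl x509 -hash` prints (b5213941 here). Spelled out as one small sketch, with the paths from the log:

	# Link a CA cert under its subject hash so OpenSSL finds it by directory scan.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"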
	I1216 04:54:06.747149   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.764470   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 04:54:06.780610   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.787541   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.791611   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 04:54:06.834505   10816 command_runner.go:130] > 51391683
	I1216 04:54:06.839668   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 04:54:06.856437   10816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.871735   10816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 04:54:06.888873   10816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895775   10816 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.895828   10816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.900176   10816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 04:54:06.943961   10816 command_runner.go:130] > 3ec20f2e
	I1216 04:54:06.948620   10816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 04:54:06.964812   10816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:54:06.978768   10816 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1216 04:54:06.978768   10816 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1216 04:54:06.978768   10816 command_runner.go:130] > Access: 2025-12-16 04:49:55.262290705 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Modify: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978768   10816 command_runner.go:130] > Change: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.978868   10816 command_runner.go:130] >  Birth: 2025-12-16 04:45:53.054773605 +0000
	I1216 04:54:06.982552   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 04:54:07.026352   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.030610   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 04:54:07.075026   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.079065   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 04:54:07.126638   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.131687   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 04:54:07.174667   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.179083   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 04:54:07.222822   10816 command_runner.go:130] > Certificate will not expire
	I1216 04:54:07.227385   10816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 04:54:07.271975   10816 command_runner.go:130] > Certificate will not expire
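Each `-checkend 86400` call asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits 0 and prints "Certificate will not expire" when it does not, which is what the restart path checks before deciding whether certs need regenerating. As a standalone sketch:

	# Exit status 0 means the cert is valid for at least another day.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "still valid" || echo "expiring within 24h"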
	I1216 04:54:07.271975   10816 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:54:07.276330   10816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 04:54:07.308756   10816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:54:07.320226   10816 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1216 04:54:07.320260   10816 command_runner.go:130] > /var/lib/minikube/etcd:
	I1216 04:54:07.320341   10816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 04:54:07.320341   10816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 04:54:07.325132   10816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 04:54:07.336047   10816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:54:07.339740   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.398431   10816 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-002200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.399021   10816 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-002200" cluster setting kubeconfig missing "functional-002200" context setting]
	I1216 04:54:07.399534   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.418099   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.418579   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.419732   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 04:54:07.419805   10816 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 04:54:07.424264   10816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 04:54:07.438954   10816 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 04:54:07.439621   10816 kubeadm.go:602] duration metric: took 119.279ms to restartPrimaryControlPlane
	I1216 04:54:07.439621   10816 kubeadm.go:403] duration metric: took 167.6444ms to StartCluster
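Both "duration metric" lines come from the usual pattern of capturing time.Now() on entry and logging time.Since() on return; a minimal sketch of that shape (function body elided):

    func restartPrimaryControlPlane() {
        start := time.Now()
        defer func() {
            // Produces lines like
            // "duration metric: took 119.279ms to restartPrimaryControlPlane".
            log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
        }()
        // ... restart work elided ...
    }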
	I1216 04:54:07.439621   10816 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.439755   10816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.440821   10816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:54:07.441789   10816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 04:54:07.441839   10816 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
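The toEnable map above is the full addon registry with only the two defaults, default-storageclass and storage-provisioner, switched on; the "Setting addon" lines that follow are the fan-out over the true entries. A sketch of that fan-out (enable is a hypothetical callback standing in for minikube's per-addon apply logic):

    toEnable := map[string]bool{
        "default-storageclass": true,
        "storage-provisioner":  true,
        "metrics-server":       false, // plus ~35 more, all false by default
    }
    for name, want := range toEnable {
        if want {
            enable(name) // hypothetical: applies the addon's manifests to the cluster
        }
    }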
	I1216 04:54:07.442048   10816 addons.go:70] Setting storage-provisioner=true in profile "functional-002200"
	I1216 04:54:07.442048   10816 addons.go:70] Setting default-storageclass=true in profile "functional-002200"
	I1216 04:54:07.442130   10816 addons.go:239] Setting addon storage-provisioner=true in "functional-002200"
	I1216 04:54:07.442130   10816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-002200"
	I1216 04:54:07.442187   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.442187   10816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 04:54:07.445437   10816 out.go:179] * Verifying Kubernetes components...
	I1216 04:54:07.450118   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.450857   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.452175   10816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:54:07.507771   10816 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:54:07.508167   10816 kapi.go:59] client config for functional-002200: &rest.Config{Host:"https://127.0.0.1:49316", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 04:54:07.508951   10816 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1216 04:54:07.508951   10816 addons.go:239] Setting addon default-storageclass=true in "functional-002200"
	I1216 04:54:07.508951   10816 host.go:66] Checking if "functional-002200" exists ...
	I1216 04:54:07.517556   10816 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 04:54:07.537496   10816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:54:07.540287   10816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.540287   10816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:54:07.546774   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.582442   10816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.582442   10816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:54:07.586285   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.606994   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 04:54:07.636962   10816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
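"scp memory --> /etc/kubernetes/addons/..." means the manifest is streamed from minikube's embedded assets straight over the SSH connection just opened on 127.0.0.1:49317, not copied from a file on disk. A hedged sketch of that idea with golang.org/x/crypto/ssh (user, port, and key path from the log; the real ssh_runner adds retries and richer error handling):

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    func scpMemory(manifest, keyBytes []byte) error {
        signer, err := ssh.ParsePrivateKey(keyBytes) // id_rsa from the machines dir
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49317", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
        })
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(manifest) // the 2676-byte in-memory YAML
        // Write it in place on the node; tee runs under sudo so /etc is writable.
        return sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
    }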
	I1216 04:54:07.645869   10816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:54:07.765470   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:07.777346   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:07.811577   10816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 04:54:07.866167   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 node_ready.go:35] waiting up to 6m0s for node "functional-002200" to be "Ready" ...
	W1216 04:54:07.869156   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.869156   10816 retry.go:31] will retry after 143.37804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 type.go:168] "Request Body" body=""
	I1216 04:54:07.870154   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	W1216 04:54:07.870154   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.870154   10816 retry.go:31] will retry after 150.951622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:07.872075   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:54:08.018062   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.025836   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.095508   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.099951   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.099951   10816 retry.go:31] will retry after 537.200798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.103237   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.103772   10816 retry.go:31] will retry after 434.961679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.544092   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:08.626905   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.632935   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.632935   10816 retry.go:31] will retry after 617.835459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.641591   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:08.717034   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:08.721285   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:08.721336   10816 retry.go:31] will retry after 555.435942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
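Every apply fails the same way because kubectl's validation step fetches the OpenAPI schema from the apiserver, and the apiserver behind localhost:8441 is evidently still coming back up after the restart, so the dial is refused. Each failure is handed to retry.go, whose delays (143ms, 150ms, 537ms, ... up to several seconds below) grow roughly exponentially with random jitter. A minimal sketch of that backoff shape; the jitter scheme here is illustrative, not minikube's exact one:

    func retryApply(apply func() error, maxAttempts int) error {
        delay := 100 * time.Millisecond
        var err error
        for attempt := 0; attempt < maxAttempts; attempt++ {
            if err = apply(); err == nil {
                return nil
            }
            // Jittered exponential backoff: sleep somewhere in [delay/2, delay),
            // then double the ceiling, matching the growing delays in the log.
            sleep := delay/2 + time.Duration(rand.Int63n(int64(delay/2)))
            log.Printf("will retry after %s: %v", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }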
	I1216 04:54:08.872382   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:08.872382   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:08.874726   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
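The with_retry.go lines show client-go's request retry layer scheduling a fixed 1s delay between node GETs while the connection keeps dropping; this is the same machinery that honors a server-sent Retry-After header. A sketch of honoring Retry-After with plain net/http (seconds-only parsing; real clients also accept HTTP-date values):

    func getWithRetryAfter(client *http.Client, url string, attempts int) (*http.Response, error) {
        for i := 0; ; i++ {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode < 500 {
                return resp, nil
            }
            if i+1 >= attempts {
                return resp, err
            }
            delay := time.Second // default when the header is absent or unparsable
            if resp != nil {
                if secs, perr := strconv.Atoi(resp.Header.Get("Retry-After")); perr == nil {
                    delay = time.Duration(secs) * time.Second
                }
                resp.Body.Close()
            }
            time.Sleep(delay)
        }
    }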
	I1216 04:54:09.256223   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:09.281163   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:09.337874   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.342648   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.342648   10816 retry.go:31] will retry after 1.171657048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.351506   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:09.353684   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.353684   10816 retry.go:31] will retry after 716.560141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:09.875116   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:09.875116   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:09.878246   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:10.075942   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:10.149131   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.153724   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.153724   10816 retry.go:31] will retry after 1.192910832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.520957   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:10.596120   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:10.600356   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.600356   10816 retry.go:31] will retry after 814.376196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:10.878697   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:10.879061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:10.882391   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:11.351917   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:11.419047   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:11.435699   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.435794   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.435828   10816 retry.go:31] will retry after 2.202073619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.493635   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:11.497994   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.498062   10816 retry.go:31] will retry after 2.124694715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:11.883396   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:11.883898   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:11.886348   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:12.886583   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:12.886583   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:12.889839   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:13.629430   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:13.643127   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 3.773255134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:13.719202   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.719202   10816 retry.go:31] will retry after 2.024299182s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:13.890150   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:13.890150   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:13.893004   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:14.893300   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:14.893707   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:14.896357   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:15.748924   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:15.832154   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:15.836153   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.836153   10816 retry.go:31] will retry after 4.710098408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:15.897470   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:15.897470   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:15.900560   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:16.900812   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:16.900812   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:16.904208   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:17.498553   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:17.582081   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:17.582134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.582134   10816 retry.go:31] will retry after 4.959220117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:17.904607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:17.904607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.907482   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:17.907482   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:17.907482   10816 type.go:168] "Request Body" body=""
	I1216 04:54:17.907482   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:17.910186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
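node_ready.go keeps polling GET /api/v1/nodes/functional-002200 for up to 6m, treating the EOFs above as transient and retrying instead of failing the start. Roughly equivalent client-go polling (a sketch: cs is an assumed kubernetes.Interface; imports are corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", and "k8s.io/apimachinery/pkg/util/wait"):

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "functional-002200", metav1.GetOptions{})
                if err != nil {
                    return false, nil // EOF / connection refused are transient here: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }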
	I1216 04:54:18.910930   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:18.910930   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:18.913636   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:19.913975   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:19.913975   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:19.917442   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:20.551463   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:20.635939   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:20.635939   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.635939   10816 retry.go:31] will retry after 7.302087091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:20.917543   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:20.917543   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:20.922152   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:21.922714   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:21.923090   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:21.925451   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:22.546716   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:22.623025   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:22.626750   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.626750   10816 retry.go:31] will retry after 6.831180284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:22.925790   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:22.925790   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:22.929352   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:23.930014   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:23.930092   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:23.932838   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:24.933846   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:24.934195   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:24.936622   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:25.937442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:25.937516   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:25.940094   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:26.940283   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:26.940283   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:26.943747   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:27.943504   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:27.945094   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:27.945165   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.947573   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:27.947626   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:27.947734   10816 type.go:168] "Request Body" body=""
	I1216 04:54:27.947766   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:27.950140   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:28.023100   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:28.027085   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.027085   10816 retry.go:31] will retry after 8.693676062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:28.950523   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:28.950523   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:28.955399   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:29.463172   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:29.548936   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:29.548936   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.551954   10816 retry.go:31] will retry after 8.541447036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:29.956404   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:29.956404   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:29.959065   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:30.959708   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:30.959708   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:30.963012   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:31.964093   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:31.964093   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:31.967555   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:32.968057   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:32.968057   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:32.970609   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:33.971778   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:33.971778   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:33.975447   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:34.975764   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:34.975764   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:34.980867   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:54:35.981702   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:35.981702   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:35.985092   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:36.726019   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:36.801339   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:36.806868   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.806868   10816 retry.go:31] will retry after 11.085665292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:36.986076   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:36.986076   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:36.989365   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:37.990461   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:37.990461   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.994420   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:54:37.994494   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:37.994613   10816 type.go:168] "Request Body" body=""
	I1216 04:54:37.994697   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:37.996806   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:38.098931   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:38.175856   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:38.181908   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.181908   10816 retry.go:31] will retry after 20.635277746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:38.997597   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:38.997597   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:39.000931   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:40.001375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:40.001375   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:40.004974   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:41.005192   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:41.005192   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:41.007919   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:42.009105   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:42.009105   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:42.012612   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:43.013312   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:43.013312   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:43.016575   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:44.017297   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:44.017297   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:44.020296   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:45.020698   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:45.020698   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:45.023875   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:46.024607   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:46.024607   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:46.027947   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.028088   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:47.028746   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:47.032023   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:47.898206   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:54:47.976246   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:47.980090   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:47.980090   10816 retry.go:31] will retry after 12.179357603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:48.033037   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:48.033037   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.035808   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:48.035808   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:48.035808   10816 type.go:168] "Request Body" body=""
	I1216 04:54:48.035808   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:48.040977   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1216 04:54:49.041226   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:49.041572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:49.043632   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:50.044672   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:50.044672   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:50.048807   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:51.049032   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:51.049032   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:51.051895   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:52.052810   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:52.052810   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:52.056184   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:53.056422   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:53.056422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:53.059030   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:54.059750   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:54.060113   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:54.063020   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:55.063099   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:55.063099   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:55.066474   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:54:56.066822   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:56.066822   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:56.071205   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:57.071421   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:57.071421   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:57.073734   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:54:58.073939   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:58.073939   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.076906   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:54:58.076906   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:54:58.076906   10816 type.go:168] "Request Body" body=""
	I1216 04:54:58.076906   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:58.081072   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:54:58.823241   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:54:58.903750   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:54:58.908134   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:58.908134   10816 retry.go:31] will retry after 21.057070222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:54:59.081704   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:54:59.082161   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:54:59.085119   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.085233   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:00.085233   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:00.088190   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:00.165511   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:00.236692   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:00.240478   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:00.240478   10816 retry.go:31] will retry after 25.698880398s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:01.089206   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:01.089206   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:01.093274   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:55:02.094123   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:02.094422   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:02.097156   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:03.098295   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:03.098295   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:03.102257   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:04.103035   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:04.103035   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:04.106884   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:05.107465   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:05.107465   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:05.110542   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:06.112033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:06.112033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:06.114883   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:07.115061   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:07.115061   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:07.118200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:08.119287   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:08.119622   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.122289   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:08.122330   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:08.122429   10816 type.go:168] "Request Body" body=""
	I1216 04:55:08.122520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:08.125754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:09.126342   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:09.126818   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:09.129086   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:10.129383   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:10.129722   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:10.133200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:11.134173   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:11.134173   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:11.136746   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:12.137338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:12.137338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:12.140387   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:13.140819   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:13.140819   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:13.144315   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:14.144624   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:14.144624   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:14.146619   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:55:15.148016   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:15.148016   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:15.150667   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:16.151188   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:16.151188   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:16.154512   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:17.154762   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:17.154762   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:17.157863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:18.158498   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:18.158835   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.161129   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:55:18.161129   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:18.161666   10816 type.go:168] "Request Body" body=""
	I1216 04:55:18.161765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:18.165763   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.166375   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:19.166948   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:19.170530   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:19.970281   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:55:20.048987   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:20.052948   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.052948   10816 retry.go:31] will retry after 40.980819462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 04:55:20.171417   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:20.171417   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:20.174285   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:21.174459   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:21.174459   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:21.178349   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:22.178639   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:22.178639   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:22.182103   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:23.182373   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:23.182373   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:23.186196   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:24.187572   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:24.187572   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:24.190721   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:25.191259   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:25.191259   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:25.193863   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:25.945563   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:55:26.023336   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:55:26.027981   10816 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:55:26.194033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:26.194033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:26.196611   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:27.198100   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:27.198100   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:27.201373   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:28.202260   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:28.202336   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:28.205520   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:55:28.205520   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:55:28.205520   10816 type.go:168] "Request Body" body=""
	I1216 04:55:28.205520   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:28.207479   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:55:29.208141   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:29.208141   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:29.210912   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:30.211277   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:30.211277   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:30.215183   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:31.215597   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:31.216087   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:31.220042   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:32.220845   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:32.220845   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:32.224468   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:55:33.225011   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:33.225011   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:33.227593   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:55:34.228072   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:34.228072   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:34.232200   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:55:35.233142   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:55:35.233142   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:55:35.236555   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[condensed: the identical GET / "Got a Retry-After response" exchange repeats once per second (delay="1s", attempts 1-10, empty responses in 1-4 ms); after every tenth attempt the client logs the failure below and starts a fresh cycle]
	W1216 04:55:38.248691   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[condensed: further identical EOF warnings at 04:55:48 and 04:55:58; the polling loop is still mid-cycle when the addon step below runs at 04:56:01]
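The block above is the Kubernetes client's standard backoff behavior: every probe of the node object comes back with a Retry-After header (and here the responses are empty, ultimately surfacing as EOF), so with_retry sleeps for the advertised delay and re-issues the GET, giving up after ten attempts and handing the error to minikube's node_ready wait. A minimal Go sketch of that loop, using only the standard library — the function name and the ten-attempt cap mirror the log, but none of this is minikube's or client-go's actual code:

// Illustrative only: re-issue the GET while the server answers with a
// Retry-After header, give up after maxAttempts, and let the caller decide
// whether to restart the whole probe.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err // e.g. the EOF seen in the log
		}
		retryAfter := resp.Header.Get("Retry-After")
		if retryAfter == "" {
			return resp, nil // terminal response, hand it back
		}
		resp.Body.Close()
		delay := time.Second // log shows delay="1s"
		if secs, convErr := strconv.Atoi(retryAfter); convErr == nil {
			delay = time.Duration(secs) * time.Second
		}
		fmt.Printf("got a Retry-After response, delay=%s attempt=%d\n", delay, attempt)
		time.Sleep(delay)
	}
	return nil, errors.New("no terminal response after max attempts")
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := getWithRetryAfter(client, "https://127.0.0.1:49316/api/v1/nodes/functional-002200", 10)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}

In the log every response carries a Retry-After, so a loop like this runs to exhaustion each time, and the caller (node_ready.go) logs the warning and retries the whole probe.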
	I1216 04:56:01.039745   10816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:56:01.115386   10816 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115386   10816 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 04:56:01.115924   10816 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 04:56:01.120162   10816 out.go:179] * Enabled addons: 
	I1216 04:56:01.123251   10816 addons.go:530] duration metric: took 1m53.6807689s for enable addons: enabled=[]
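The storageclass addon fails for the same underlying reason: kubectl's client-side validation has to download the apiserver's OpenAPI document, and nothing is answering on localhost:8441. The error text itself suggests --validate=false, though in this run that alone would not have helped, since the apply also needs the apiserver. As a hedged sketch of the mechanical fallback the message describes — the kubectl and manifest paths are copied from the log, the wrapper itself is illustrative:

// Illustrative sketch: apply a manifest, retrying once with --validate=false
// when client-side validation cannot download the OpenAPI schema (the failure
// mode logged above).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func applyManifest(kubectl, manifest string) error {
	args := []string{"apply", "--force", "-f", manifest}
	out, err := exec.Command(kubectl, args...).CombinedOutput()
	if err == nil {
		return nil
	}
	// "failed to download openapi" means validation, not the apply itself,
	// failed first; retry once with validation turned off.
	if strings.Contains(string(out), "failed to download openapi") {
		out, err = exec.Command(kubectl, append(args, "--validate=false")...).CombinedOutput()
	}
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/etc/kubernetes/addons/storageclass.yaml",
	); err != nil {
		fmt.Println(err)
	}
}

Skipping validation only bypasses the OpenAPI download; a refused connection on the apply itself still fails, which is why addons.go keeps its own "apply failed, will retry" loop around the whole command, as the warning above shows.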
	[condensed: the node-Ready polling resumes and continues unchanged, one GET per second against https://127.0.0.1:49316/api/v1/nodes/functional-002200, with an identical EOF warning after every tenth attempt at 04:56:08, 04:56:18, 04:56:28, 04:56:38, 04:56:48, 04:56:58 and 04:57:08, e.g.:]
	W1216 04:56:08.375993   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	[condensed: the captured log ends mid-cycle]
	I1216 04:57:14.649513   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:14.649513   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:14.652510   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:57:15.652980   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:15.652980   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:15.656319   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:16.656586   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:16.656586   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:16.659754   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:17.659826   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:17.659826   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:17.663603   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:57:18.664062   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:57:18.664062   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:57:18.667107   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:57:18.667107   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
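The cadence above (one GET per second, ten Retry-After responses, then a node_ready warning and a fresh poll) matches the warning spacing in the log: 10 attempts at 1s each, plus a 1-5 ms round trip per request, is roughly the 10s gap between the 04:57:08 and 04:57:18 warnings. Below is a minimal, self-contained sketch of that client behaviour, assuming a plain net/http client. The URL, the 1s delay, and the 10-attempt cap are taken from the log; the function names, the 5s per-request timeout, and the 6-minute overall budget are illustrative assumptions, not minikube's or client-go's actual implementation (the real client also trusts the cluster CA rather than using default TLS settings).

    // retrypoll.go: an illustrative sketch of the retry loop seen in the log.
    // Not minikube's code; names and budgets here are assumptions.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    const nodeURL = "https://127.0.0.1:49316/api/v1/nodes/functional-002200"

    // getWithRetryAfter issues one GET and, on a Retry-After response or a
    // transport error (e.g. the EOF in the log), sleeps 1s and retries,
    // up to maxAttempts times.
    func getWithRetryAfter(c *http.Client, url string, maxAttempts int) error {
        var lastErr error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            resp, err := c.Get(url)
            if err != nil {
                lastErr = err // e.g. EOF when the apiserver drops the connection
            } else {
                io.Copy(io.Discard, resp.Body) // drain so the connection is reused
                resp.Body.Close()
                if resp.Header.Get("Retry-After") == "" {
                    return nil // a real answer, not a throttling response
                }
                lastErr = fmt.Errorf("got a Retry-After response on attempt %d", attempt)
            }
            time.Sleep(1 * time.Second) // the fixed 1s delay logged by with_retry.go
        }
        return lastErr
    }

    func main() {
        c := &http.Client{Timeout: 5 * time.Second}
        deadline := time.Now().Add(6 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            // Outer poll: mirrors the "(will retry)" warning-and-restart
            // behaviour of node_ready.go in the log above.
            if err := getWithRetryAfter(c, nodeURL, 10); err != nil {
                log.Printf("error getting node %q condition \"Ready\" status (will retry): %v",
                    "functional-002200", err)
                continue
            }
            return
        }
        log.Fatal("node never became reachable within the budget")
    }

Run against a healthy apiserver this returns on the first attempt; against the broken endpoint in this report it reproduces the log's rhythm of ten 1s attempts per warning until the budget expires.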
	[... the same 1s retry cycle repeats essentially verbatim, with only timestamps and attempt counters changing: each GET https://127.0.0.1:49316/api/v1/nodes/functional-002200 returns an empty status in 1-5 ms, and after every tenth Retry-After response node_ready.go:55 logs the EOF warning and issues a fresh poll. Warnings recur at 04:57:28, 04:57:38, 04:57:48, 04:57:58, 04:58:08, 04:58:18, 04:58:28 and 04:58:38, with attempts 1-7 of the next cycle following through 04:58:46 ...]
	I1216 04:58:47.033121   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:47.033121   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:47.036186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:48.037338   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:48.037338   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:48.041634   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:58:49.041943   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:49.041943   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:49.044552   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:58:49.044552   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:49.045136   10816 type.go:168] "Request Body" body=""
	I1216 04:58:49.045179   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:49.047881   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:50.048858   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:50.049289   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:50.052681   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:51.053215   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:51.053675   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:51.055662   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:58:52.056918   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:52.056918   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:52.060467   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:53.061555   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:53.061992   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:53.063425   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1216 04:58:54.065095   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:54.065095   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:54.067617   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:55.068285   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:55.068285   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:55.071811   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:56.072296   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:56.072296   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:56.074442   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:57.075200   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:57.075200   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:57.078550   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:58:58.079588   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:58.079588   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:58.082364   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:58:59.083252   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:58:59.083252   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:59.085627   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:58:59.085627   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:58:59.085627   10816 type.go:168] "Request Body" body=""
	I1216 04:58:59.085627   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:58:59.088880   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:00.089932   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:00.090292   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:00.093204   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:01.093501   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:01.093501   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:01.096419   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:02.096985   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:02.096985   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:02.099764   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:03.100341   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:03.100341   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:03.103928   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:04.103977   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:04.103977   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:04.107337   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:05.108232   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:05.108232   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:05.110967   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:06.112125   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:06.112125   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:06.115328   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:07.115765   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:07.115765   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:07.119250   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:08.119457   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:08.119457   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:08.122449   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:09.122631   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:09.122631   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:09.125978   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:09.126506   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:09.126611   10816 type.go:168] "Request Body" body=""
	I1216 04:59:09.126692   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:09.128714   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:10.129007   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:10.129007   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:10.132112   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:11.132462   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:11.132909   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:11.135945   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:12.136431   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:12.136431   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:12.139277   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:13.140319   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:13.140319   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:13.143791   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:14.144673   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:14.144969   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:14.147133   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:15.148066   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:15.148066   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:15.151666   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:16.152576   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:16.152576   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:16.155181   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:17.155710   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:17.155710   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:17.158668   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:18.159541   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:18.159541   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:18.163278   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:19.163911   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:19.163911   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:19.167509   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:19.167509   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:19.167509   10816 type.go:168] "Request Body" body=""
	I1216 04:59:19.167509   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:19.170448   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:20.170687   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:20.170687   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:20.173841   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:21.174586   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:21.174671   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:21.177173   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:22.177927   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:22.177927   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:22.181163   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:23.181445   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:23.181445   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:23.184486   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:24.184984   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:24.184984   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:24.188169   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:25.189332   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:25.189332   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:25.192735   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:26.193626   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:26.193973   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:26.198186   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:27.198396   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:27.198396   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:27.201696   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:28.202442   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:28.202442   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:28.205986   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:29.206746   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:29.207127   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:29.209566   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1216 04:59:29.209566   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:29.209566   10816 type.go:168] "Request Body" body=""
	I1216 04:59:29.210103   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:29.212125   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:30.212524   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:30.212524   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:30.215655   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:31.216215   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:31.216215   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:31.219690   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:32.220046   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:32.220046   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:32.223009   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:33.223314   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:33.223314   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:33.227018   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:34.227625   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:34.227625   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:34.230861   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:35.230966   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:35.230966   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:35.233871   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:36.234450   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:36.234450   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:36.238041   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:37.238279   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:37.238279   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:37.242076   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:38.242327   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:38.242667   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:38.244855   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:39.245186   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:39.245186   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:39.248453   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:39.248453   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:39.248453   10816 type.go:168] "Request Body" body=""
	I1216 04:59:39.248453   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:39.251221   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:40.252169   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:40.252169   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:40.255087   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:41.255519   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:41.255519   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:41.258620   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:42.258899   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:42.258899   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:42.262729   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:43.262828   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:43.263200   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:43.266061   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:44.266376   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:44.266376   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:44.269929   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:45.270664   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:45.270664   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:45.273706   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:46.274385   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:46.274490   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:46.277222   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:47.277605   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:47.277605   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:47.280855   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:48.281379   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:48.281379   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:48.284989   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:49.285064   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:49.285064   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:49.288248   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:49.288292   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:49.288292   10816 type.go:168] "Request Body" body=""
	I1216 04:59:49.288292   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:49.290985   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:50.292197   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:50.292197   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:50.295316   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:51.295720   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:51.295720   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:51.299727   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1216 04:59:52.299933   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:52.300336   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:52.302657   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:53.303447   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:53.303447   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:53.306915   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:54.307348   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:54.307348   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:54.311155   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:55.311730   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:55.311730   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:55.315225   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:56.315472   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:56.315472   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:56.318408   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:57.319302   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:57.319302   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:57.322311   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 04:59:58.323301   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:58.323301   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:58.326036   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 04:59:59.326779   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 04:59:59.327147   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:59.330755   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 04:59:59.330828   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): Get "https://127.0.0.1:49316/api/v1/nodes/functional-002200": EOF
	I1216 04:59:59.330946   10816 type.go:168] "Request Body" body=""
	I1216 04:59:59.331049   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 04:59:59.334070   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:00.334751   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:00.335172   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:00.337839   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:01.338521   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:01.338521   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:01.341452   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:02.342326   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:02.342746   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:02.345360   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:03.346006   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:03.346006   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:03.349240   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 05:00:04.349594   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:04.349594   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:04.352907   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 05:00:05.354033   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:05.354033   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:05.357772   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1216 05:00:06.357911   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:06.358319   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:06.360594   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1216 05:00:07.361136   10816 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:49316/api/v1/nodes/functional-002200"
	I1216 05:00:07.361136   10816 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:49316/api/v1/nodes/functional-002200" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1216 05:00:07.364543   10816 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1216 05:00:07.871664   10816 node_ready.go:55] error getting node "functional-002200" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 05:00:07.871664   10816 node_ready.go:38] duration metric: took 6m0.0002013s for node "functional-002200" to be "Ready" ...
	I1216 05:00:07.876577   10816 out.go:203] 
	W1216 05:00:07.879616   10816 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 05:00:07.879616   10816 out.go:285] * 
	W1216 05:00:07.881276   10816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:00:07.884672   10816 out.go:203] 
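
	[editor's note] The six minutes of retries above are minikube's node-readiness wait (node_ready.go) giving up at its 6m deadline. As an illustration only — a minimal client-go sketch of the same poll-the-Ready-condition loop, not minikube's actual source; the node name, 1s interval, and 6m timeout are taken from this log, everything else is assumed:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition once per second until it
	// reports True or ctx expires, mirroring the wait loop visible in the log.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextCancel(ctx, time.Second, true, func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Transient errors (such as the EOFs above) are logged and retried.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 6-minute budget, matching the "wait 6m0s for node" failure above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-002200"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

	In this run the loop never saw a usable response — every GET ended in EOF — so the context deadline fired and minikube exited with GUEST_START.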
	
	
	==> Docker <==
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532904868Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532910769Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.532962273Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.533000176Z" level=info msg="Initializing buildkit"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.632934284Z" level=info msg="Completed buildkit initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638730325Z" level=info msg="Daemon has completed initialization"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638930540Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638973643Z" level=info msg="API listen on [::]:2376"
	Dec 16 04:54:04 functional-002200 dockerd[10564]: time="2025-12-16T04:54:04.638987344Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:04 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 04:54:04 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 04:54:05 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Loaded network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 04:54:05 functional-002200 cri-dockerd[10880]: time="2025-12-16T04:54:05Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 04:54:05 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:02:20.479916   20415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:20.480789   20415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:20.483151   20415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:20.484523   20415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:02:20.485435   20415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001061] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001041] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001007] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000838] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001072] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 04:54] CPU: 8 PID: 53756 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001140] RIP: 0033:0x7f1fa5473b20
	[  +0.000543] Code: Unable to access opcode bytes at RIP 0x7f1fa5473af6.
	[  +0.001042] RSP: 002b:00007ffde8c4f290 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000944] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001046] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000944] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001149] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000795] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000802] FS:  0000000000000000 GS:  0000000000000000
	[  +0.814553] CPU: 10 PID: 53882 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000797] RIP: 0033:0x7f498f339b20
	[  +0.000391] Code: Unable to access opcode bytes at RIP 0x7f498f339af6.
	[  +0.000625] RSP: 002b:00007ffc77d465d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000824] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:02:20 up 38 min,  0 user,  load average: 0.34, 0.40, 0.57
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:02:17 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:17 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 989.
	Dec 16 05:02:17 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:17 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:17 functional-002200 kubelet[20237]: E1216 05:02:17.849690   20237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:17 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:17 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:18 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 990.
	Dec 16 05:02:18 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:18 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:18 functional-002200 kubelet[20251]: E1216 05:02:18.593117   20251 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:18 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:18 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:19 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 991.
	Dec 16 05:02:19 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:19 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:19 functional-002200 kubelet[20281]: E1216 05:02:19.355477   20281 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:19 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:19 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:02:20 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 992.
	Dec 16 05:02:20 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:20 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:02:20 functional-002200 kubelet[20309]: E1216 05:02:20.084154   20309 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:02:20 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:02:20 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (572.1717ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (3.25s)
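Note on root cause: the kubelet journal above shows why this test fails, and the ExtraConfig run that follows fails the same way: kubelet v1.35.0-beta.0 exits during configuration validation because the host is still on cgroup v1, which matches the 5.15.153.1-microsoft-standard-WSL2 kernel this agent runs on. A minimal diagnostic sketch, assuming the functional-002200 profile is still up (invocation style follows the report's own commands; the cgroup2fs-vs-tmpfs convention for stat is from the upstream Kubernetes cgroup documentation):

	# sketch: confirm which cgroup hierarchy the node container sees
	out/minikube-windows-amd64.exe -p functional-002200 ssh "stat -fc %T /sys/fs/cgroup"
	# cgroup2fs => cgroup v2; tmpfs => cgroup v1 (the failing case in the journal above)
	out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo journalctl -u kubelet --no-pager | tail -n 20"

Per the kubeadm warning quoted later in this report, keeping kubelet v1.35+ on a cgroup v1 host requires setting the KubeletConfiguration option 'FailCgroupV1' to 'false' and explicitly skipping that validation; moving the WSL2 backend to cgroup v2 avoids the problem entirely.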

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (739.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-002200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 05:04:55.672693   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:06:18.743362   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:07:01.799722   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:09:55.674748   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:10:04.877620   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:12:01.803384   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-002200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m15.5780188s)

                                                
                                                
-- stdout --
	* [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000864945s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
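The suggestion printed above targets the kubelet cgroup driver, but the kubelet journal earlier in this report shows the restart loop dying on the cgroup v1 validation rather than on a driver mismatch, so the retry below is only a sketch of minikube's own suggestion, not a verified fix for this runner:

	# sketch: retry with exactly the flag minikube suggests in the stderr above
	out/minikube-windows-amd64.exe start -p functional-002200 --extra-config=kubelet.cgroup-driver=systemd

If the cgroup v1 validation is the actual blocker, the more promising remediation is switching the Docker Desktop WSL2 backend to cgroup v2 (commonly via kernelCommandLine = cgroup_no_v1=all under [wsl2] in .wslconfig, then restarting WSL); that setting name comes from WSL documentation and is an assumption, not something this report exercises.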
functional_test.go:774: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-002200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m15.5855503s for "functional-002200" cluster.
I1216 05:14:37.667728   11704 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
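One detail worth cross-checking in the inspect output above: container port 8441/tcp (the apiserver port for this profile) is published on 127.0.0.1:49316, which is the same endpoint the earlier round_trippers probes were timing out against, so the "connection refused" on localhost:8441 inside the node and the stalled requests to 49316 from the host are two views of the same dead apiserver. A quick sketch to confirm the mapping from the host with the standard docker CLI:

	# sketch: show the host port bound to the node container's apiserver port
	docker port functional-002200 8441
	# expected, per the inspect output above: 127.0.0.1:49316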
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (589.7903ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.2719144s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr                  │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service │ functional-902700 service hello-node --url                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image   │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format yaml --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format table --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format json --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ delete  │ -p functional-902700                                                                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │ 16 Dec 25 04:45 UTC │
	│ start   │ -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │                     │
	│ start   │ -p functional-002200 --alsologtostderr -v=8                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:53 UTC │                     │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.1                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.3                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:latest                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add minikube-local-cache-test:functional-002200                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache delete minikube-local-cache-test:functional-002200                                              │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl images                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	│ cache   │ functional-002200 cache reload                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ kubectl │ functional-002200 kubectl -- --context functional-002200 get pods                                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	│ start   │ -p functional-002200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:02:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:02:22.143364   13524 out.go:360] Setting OutFile to fd 1016 ...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.184929   13524 out.go:374] Setting ErrFile to fd 816...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.200191   13524 out.go:368] Setting JSON to false
	I1216 05:02:22.202193   13524 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2363,"bootTime":1765858978,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:02:22.202193   13524 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:02:22.207191   13524 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:02:22.209167   13524 notify.go:221] Checking for updates...
	I1216 05:02:22.213806   13524 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:02:22.217226   13524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:02:22.219465   13524 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:02:22.221726   13524 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:02:22.223984   13524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:02:22.226535   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:22.226535   13524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:02:22.342632   13524 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:02:22.345860   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.582056   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.565555373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.589056   13524 out.go:179] * Using the docker driver based on existing profile
	I1216 05:02:22.591055   13524 start.go:309] selected driver: docker
	I1216 05:02:22.591055   13524 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:22.592055   13524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:02:22.597056   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.818036   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.800509482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.866190   13524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:02:22.866190   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:22.866190   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:22.866190   13524 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
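
Note: the cluster config dumped above is the same structure minikube persists to the profile's config.json (see the profile.go save below at 05:02:22.881584). A sketch for re-reading the interesting fields from that file on the host, assuming jq is available on the Jenkins machine (field names taken from the dump itself):

	# Pull the Kubernetes version and the apiserver extra options out of the saved profile
	jq '{Name, KubernetesVersion: .KubernetesConfig.KubernetesVersion, ExtraOptions: .KubernetesConfig.ExtraOptions}' \
	  'C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json'
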
	I1216 05:02:22.870532   13524 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 05:02:22.874014   13524 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 05:02:22.876014   13524 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:02:22.880521   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:22.880869   13524 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:02:22.880869   13524 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 05:02:22.880869   13524 cache.go:65] Caching tarball of preloaded images
	I1216 05:02:22.880869   13524 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 05:02:22.881393   13524 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 05:02:22.881584   13524 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 05:02:22.957945   13524 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:02:22.957945   13524 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:02:22.957945   13524 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:02:22.957945   13524 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:02:22.957945   13524 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-002200"
	I1216 05:02:22.957945   13524 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:02:22.957945   13524 fix.go:54] fixHost starting: 
	I1216 05:02:22.964754   13524 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 05:02:23.020643   13524 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 05:02:23.020643   13524 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:02:23.024655   13524 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 05:02:23.024655   13524 machine.go:94] provisionDockerMachine start ...
	I1216 05:02:23.028059   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.089226   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.089720   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.089720   13524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:02:23.263587   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 05:02:23.263587   13524 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 05:02:23.269095   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.343706   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.344098   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.344098   13524 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 05:02:23.523871   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 05:02:23.527605   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.582373   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.582799   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.582799   13524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:02:23.744731   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:02:23.744781   13524 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 05:02:23.744810   13524 ubuntu.go:190] setting up certificates
	I1216 05:02:23.744810   13524 provision.go:84] configureAuth start
	I1216 05:02:23.748413   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:23.805299   13524 provision.go:143] copyHostCerts
	I1216 05:02:23.805299   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 05:02:23.805299   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 05:02:23.805870   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 05:02:23.806787   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 05:02:23.806813   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 05:02:23.806957   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 05:02:23.807512   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 05:02:23.807512   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 05:02:23.807512   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 05:02:23.808114   13524 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
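
Note: configureAuth regenerates the Docker server certificate with the SAN list shown above (127.0.0.1 192.168.49.2 functional-002200 localhost minikube). A minimal openssl check of what actually landed in the certificate, using the path from the log:

	# Print the Subject Alternative Name extension of the regenerated server cert
	openssl x509 -noout -text \
	  -in 'C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem' \
	  | grep -A1 'Subject Alternative Name'
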
	I1216 05:02:24.024499   13524 provision.go:177] copyRemoteCerts
	I1216 05:02:24.027499   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:02:24.030499   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.084455   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:24.207064   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 05:02:24.231047   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:02:24.253218   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:02:24.278696   13524 provision.go:87] duration metric: took 533.8823ms to configureAuth
	I1216 05:02:24.278696   13524 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:02:24.279294   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:24.283136   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.338661   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.338661   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.338661   13524 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 05:02:24.501259   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 05:02:24.501259   13524 ubuntu.go:71] root file system type: overlay
	I1216 05:02:24.503332   13524 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 05:02:24.506757   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.561628   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.562204   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.562204   13524 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 05:02:24.732222   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 05:02:24.736823   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.789603   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.790705   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.790705   13524 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 05:02:24.956843   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
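
Note: the command above only swaps in docker.service.new when `diff -u` reports a difference, so an unchanged unit skips the daemon-reload and docker restart entirely. A sketch for confirming the effective unit on the node afterwards (the drop-in technique described in the unit's own comments requires exactly one non-empty ExecStart to survive):

	# Show the unit systemd actually loaded and its ExecStart lines (grep runs inside the node)
	out/minikube-windows-amd64.exe -p functional-002200 ssh -- 'sudo systemctl cat docker.service | grep -n ExecStart'
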
	I1216 05:02:24.956843   13524 machine.go:97] duration metric: took 1.9321739s to provisionDockerMachine
	I1216 05:02:24.956843   13524 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 05:02:24.956843   13524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:02:24.961328   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:02:24.963780   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.018396   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.151694   13524 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:02:25.159738   13524 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:02:25.159738   13524 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 05:02:25.160372   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 05:02:25.161048   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 05:02:25.165137   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 05:02:25.176929   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 05:02:25.202240   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 05:02:25.226560   13524 start.go:296] duration metric: took 269.6889ms for postStartSetup
	I1216 05:02:25.230465   13524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:02:25.232786   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.287361   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.409366   13524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:02:25.419299   13524 fix.go:56] duration metric: took 2.4613371s for fixHost
	I1216 05:02:25.419299   13524 start.go:83] releasing machines lock for "functional-002200", held for 2.4613371s
	I1216 05:02:25.423876   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:25.479590   13524 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 05:02:25.483988   13524 ssh_runner.go:195] Run: cat /version.json
	I1216 05:02:25.483988   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.487582   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.542893   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.550987   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	W1216 05:02:25.660611   13524 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 05:02:25.682804   13524 ssh_runner.go:195] Run: systemctl --version
	I1216 05:02:25.696301   13524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:02:25.703847   13524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:02:25.708899   13524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:02:25.720784   13524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:02:25.720820   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:25.720861   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:25.720884   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:25.746032   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 05:02:25.756672   13524 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 05:02:25.756737   13524 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
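
Note: this warning traces back to the probe at 05:02:25.479, which ran `curl.exe` inside the Linux node and exited 127 with "curl.exe: command not found" - so it reports a failed probe rather than a confirmed network fault. A sketch for re-checking reachability manually, assuming curl is present in the kicbase image:

	# Repeat the registry probe with a binary that exists inside the node
	out/minikube-windows-amd64.exe -p functional-002200 ssh -- curl -sS -m 2 https://registry.k8s.io/
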
	I1216 05:02:25.764577   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 05:02:25.778652   13524 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 05:02:25.782944   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 05:02:25.802561   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.822362   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 05:02:25.841368   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.860152   13524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:02:25.878804   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 05:02:25.897721   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 05:02:25.916509   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 05:02:25.935848   13524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:02:25.954408   13524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:02:25.972671   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.135013   13524 ssh_runner.go:195] Run: sudo systemctl restart containerd
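
Note: the sed edits above pin the sandbox image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the "cgroupfs" driver detected on the host OS, and normalize the runtime to io.containerd.runc.v2 before this restart. A spot-check of the resulting config on the node:

	# Confirm the cgroup setting the sed edits left behind in containerd's config
	out/minikube-windows-amd64.exe -p functional-002200 ssh -- grep -n SystemdCgroup /etc/containerd/config.toml
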
	I1216 05:02:26.286857   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:26.286857   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:26.291710   13524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 05:02:26.313739   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.335410   13524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:02:26.394402   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.416456   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 05:02:26.433425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:26.458250   13524 ssh_runner.go:195] Run: which cri-dockerd
	I1216 05:02:26.469192   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 05:02:26.479991   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 05:02:26.508331   13524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 05:02:26.653923   13524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 05:02:26.807509   13524 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 05:02:26.808040   13524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 05:02:26.830421   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 05:02:26.853437   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.993507   13524 ssh_runner.go:195] Run: sudo systemctl restart docker
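
Note: the 130-byte /etc/docker/daemon.json written at 05:02:26.808 is what aligns dockerd with the "cgroupfs" driver; its exact contents are not logged, so the check below only confirms the effect, using the same probe minikube itself runs later (05:02:29.083):

	# dockerd should report cgroupfs after the restart
	out/minikube-windows-amd64.exe -p functional-002200 ssh -- docker info --format '{{.CgroupDriver}}'
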
	I1216 05:02:27.802449   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:02:27.823963   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 05:02:27.846489   13524 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 05:02:27.872589   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:27.893632   13524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 05:02:28.032388   13524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 05:02:28.173426   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.303647   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 05:02:28.327061   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 05:02:28.347849   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.515228   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 05:02:28.617223   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:28.634479   13524 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 05:02:28.638575   13524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 05:02:28.646251   13524 start.go:564] Will wait 60s for crictl version
	I1216 05:02:28.650257   13524 ssh_runner.go:195] Run: which crictl
	I1216 05:02:28.663129   13524 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:02:28.707678   13524 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 05:02:28.711140   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.754899   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.798065   13524 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 05:02:28.801328   13524 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 05:02:28.928679   13524 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 05:02:28.933317   13524 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 05:02:28.945787   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:29.006099   13524 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 05:02:29.009213   13524 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:02:29.009213   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:29.012544   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.044964   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.045018   13524 docker.go:621] Images already preloaded, skipping extraction
	I1216 05:02:29.050176   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.078871   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.078871   13524 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:02:29.078871   13524 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 05:02:29.078871   13524 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
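
Note: like the docker.service unit earlier, this kubelet drop-in first clears the packaged command with an empty ExecStart= line and then supplies minikube's own invocation, pointing kubelet at the node IP and the cluster kubeconfig. A sketch for reading back the merged unit on the node:

	# systemctl cat shows the base unit plus the 10-kubeadm.conf drop-in written below
	out/minikube-windows-amd64.exe -p functional-002200 ssh -- 'sudo systemctl cat kubelet | grep -n ExecStart'
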
	I1216 05:02:29.083733   13524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 05:02:29.153386   13524 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 05:02:29.153441   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:29.153441   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:29.153441   13524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:02:29.153497   13524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:02:29.153740   13524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:02:29.159735   13524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:02:29.170652   13524 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:02:29.175184   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:02:29.187845   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 05:02:29.208540   13524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:02:29.226431   13524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
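
Note: the kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. Newer kubeadm releases (v1.26 and later, an assumption for this beta build) can lint such a file before it is consumed; a sketch using the binaries directory found at 05:02:29.159:

	# Validate the staged kubeadm config with the matching kubeadm binary
	out/minikube-windows-amd64.exe -p functional-002200 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
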
	I1216 05:02:29.250294   13524 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:02:29.261010   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:29.404128   13524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:02:30.007557   13524 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 05:02:30.007557   13524 certs.go:195] generating shared ca certs ...
	I1216 05:02:30.007557   13524 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 05:02:30.008887   13524 certs.go:257] generating profile certs ...
	I1216 05:02:30.013750   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 05:02:30.014952   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 05:02:30.015510   13524 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 05:02:30.017231   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:02:30.047196   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 05:02:30.070848   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:02:30.096702   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:02:30.121970   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:02:30.146884   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:02:30.173170   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:02:30.199629   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:02:30.226778   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 05:02:30.250105   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:02:30.272968   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 05:02:30.298291   13524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:02:30.318635   13524 ssh_runner.go:195] Run: openssl version
	I1216 05:02:30.332668   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.355358   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 05:02:30.372181   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.379909   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.384371   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.432373   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:02:30.447662   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.464870   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:02:30.481196   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.489322   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.492995   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.540388   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:02:30.558567   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.574821   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 05:02:30.592525   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.598815   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.603416   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.650141   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
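[editor's note] Each triple above follows one pattern: install the PEM, compute its OpenSSL subject hash, and link it as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0) so TLS libraries scanning that directory can resolve the CA. A minimal Go sketch of the convention, shelling out to the same openssl invocation (paths illustrative, not minikube's code):

package sketch

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links a trusted PEM under its OpenSSL subject hash.
func installCA(pemPath string) error {
	// `openssl x509 -hash -noout` prints only the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// Equivalent to `ln -fs`: drop any stale link before recreating it.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}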
	I1216 05:02:30.666001   13524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:02:30.677986   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:02:30.724950   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:02:30.775114   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:02:30.821700   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:02:30.868594   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:02:30.916597   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
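[editor's note] The -checkend 86400 probes verify each control-plane cert stays valid for at least 24 hours (86400 s) before being reused. The same check can be done natively with crypto/x509; a small sketch (not minikube's actual code):

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath
// expires inside the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}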
	I1216 05:02:30.959171   13524 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:30.963942   13524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:30.994317   13524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:02:31.005043   13524 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:02:31.005043   13524 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:02:31.009827   13524 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:02:31.023534   13524 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.026842   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:31.080676   13524 kubeconfig.go:125] found "functional-002200" server: "https://127.0.0.1:49316"
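[editor's note] With the docker driver, port 8441 inside the container is published on an ephemeral host port (49316 in this run); the inspect template above extracts it to build the kubeconfig server URL. An equivalent standalone lookup, shelling out the same way (the container name is this run's profile and should be treated as an example value):

package sketch

import (
	"os/exec"
	"strings"
)

// hostPortFor resolves the host port Docker published for the
// container's 8441/tcp, using the same Go template as the log above.
func hostPortFor(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}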
	I1216 05:02:31.087667   13524 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:02:31.101385   13524 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:45:52.574738576 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 05:02:29.239240136 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
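[editor's note] Drift detection above rides on diff's exit status: 0 means the on-disk kubeadm.yaml matches the regenerated one, 1 means it drifted (here, the enable-admission-plugins change), and anything higher is a genuine failure. A simplified reading of that decision in Go (not the exact minikube implementation):

package sketch

import (
	"errors"
	"os/exec"
)

// configDrifted runs `diff -u` the way the restart path does and maps
// the exit status onto a drift / no-drift / error decision.
func configDrifted(current, generated string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, generated).Run()
	if err == nil {
		return false, nil // identical: no reconfiguration needed
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // files differ: drift detected
	}
	return false, err // exit > 1 or exec failure: a real error
}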
	I1216 05:02:31.101385   13524 kubeadm.go:1161] stopping kube-system containers ...
	I1216 05:02:31.105991   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:31.137859   13524 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 05:02:31.162569   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:02:31.173570   13524 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 04:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 04:49 /etc/kubernetes/scheduler.conf
	
	I1216 05:02:31.178070   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:02:31.193447   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:02:31.204464   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.208708   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:02:31.223814   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.236112   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.240050   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.256323   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:02:31.270390   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.274655   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
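[editor's note] The grep/rm pairs above apply one rule per component kubeconfig: if the file no longer references the expected control-plane endpoint (https://control-plane.minikube.internal:8441), delete it so the kubeconfig init phase regenerates it. Sketched in Go under the same assumption:

package sketch

import "os/exec"

// pruneStaleKubeconfig keeps a component kubeconfig only if it still
// points at the expected endpoint; grep's non-zero exit (endpoint
// absent or file unreadable) is the same signal the log keys off.
func pruneStaleKubeconfig(path, endpoint string) error {
	if exec.Command("sudo", "grep", endpoint, path).Run() == nil {
		return nil // endpoint present: keep the file
	}
	return exec.Command("sudo", "rm", "-f", path).Run()
}

// Example call, matching this run:
//   pruneStaleKubeconfig("/etc/kubernetes/kubelet.conf",
//       "https://control-plane.minikube.internal:8441")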
	I1216 05:02:31.291834   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:02:31.309287   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.373785   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.743926   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.973968   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:32.044614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
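[editor's note] Rather than a full `kubeadm init`, the restart path re-runs individual init phases against the refreshed config, in the order shown above. A compressed sketch of that sequence (version and paths taken from this run; not the actual minikube code):

package sketch

import (
	"fmt"
	"os/exec"
)

// rerunControlPlanePhases replays the kubeadm init phases from the log
// against the regenerated kubeadm.yaml, stopping at the first failure.
func rerunControlPlanePhases() error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if err := exec.Command("sudo", "/bin/bash", "-c", cmd).Run(); err != nil {
			return fmt.Errorf("phase %q: %w", phase, err)
		}
	}
	return nil
}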
	I1216 05:02:32.128503   13524 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:02:32.133080   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:32.634591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:33.135532   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:33.633951   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:34.133670   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:34.636362   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:35.133362   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:35.634567   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:36.133378   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:36.634652   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:37.133364   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:37.635212   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:38.133996   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:38.634136   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:39.133538   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:39.634806   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:40.133591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:40.633797   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:41.133611   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:41.634039   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:42.133614   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:42.634568   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:43.134027   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:43.634254   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:44.133984   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:44.634389   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:45.133761   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:45.634255   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:46.134409   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:46.634402   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:47.133336   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:47.634728   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:48.133723   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:48.634056   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:49.133313   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:49.634057   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:50.134418   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:50.633737   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:51.133246   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:51.634053   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:52.134086   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:52.633592   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:53.134909   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:53.633883   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:54.133900   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:54.633980   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:55.133861   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:55.634905   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:56.133623   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:56.633940   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:57.133423   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:57.635127   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:58.133876   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:58.634340   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:59.133894   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:59.633621   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:00.136295   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:00.633723   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:01.133850   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:01.630633   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:02.135818   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:02.635548   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:03.134173   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:03.634568   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:04.133911   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:04.634440   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:05.133383   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:05.633913   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:06.133618   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:06.635004   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:07.133967   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:07.634270   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:08.133741   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:08.633647   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:09.134149   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:09.634014   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:10.133536   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:10.633733   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:11.134705   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:11.634320   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:12.134680   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:12.634430   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:13.134597   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:13.634710   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:14.134733   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:14.634512   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:15.134218   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:15.633594   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:16.134090   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:16.634446   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:17.134183   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:17.634400   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:18.134566   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:18.633972   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:19.134271   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:19.634238   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:20.134883   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:20.634468   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:21.134017   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:21.634112   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:22.135187   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:22.634480   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:23.134672   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:23.633614   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:24.134339   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:24.634245   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:25.135181   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:25.634475   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:26.134348   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:26.634151   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:27.133880   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:27.633366   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:28.133826   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:28.634409   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:29.133350   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:29.633502   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:30.134183   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:30.633644   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:31.133961   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:31.634081   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
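[editor's note] The burst of pgrep lines above is a fixed-cadence wait loop: probe for the kube-apiserver process roughly every half second until it appears or a deadline passes. In this run it never appears, so minikube starts interleaving the diagnostic dumps that follow. A minimal sketch of the same pattern:

package sketch

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls for the apiserver process at ~500ms
// intervals, using the same pgrep pattern as the log above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}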
	I1216 05:03:32.132156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:32.161948   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.161948   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:32.165532   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:32.190451   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.190451   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:32.194000   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:32.221132   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.221201   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:32.224735   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:32.251199   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.251265   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:32.254803   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:32.285399   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.285399   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:32.288927   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:32.316407   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.316407   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:32.320399   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:32.348258   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.348330   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:32.348330   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:32.348330   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:32.391508   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:32.391508   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:32.457156   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:32.457156   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:32.517211   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:32.517211   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:32.547816   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:32.547816   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:32.628349   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
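[editor's note] Every "describe nodes" attempt fails the same way: kubectl dials localhost:8441 and gets connection refused, meaning no apiserver is accepting on the port yet. That is a symptom of the wait loop above never succeeding, not a kubectl misconfiguration. A hypothetical standalone probe (not part of minikube) that checks the same condition:

package main

import (
	"fmt"
	"net"
	"time"
)

// Dials the apiserver port directly; "connection refused" here is the
// same condition behind the kubectl failures above.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}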
	I1216 05:03:35.133793   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:35.155411   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:35.187090   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.187090   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:35.190727   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:35.222945   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.223013   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:35.226777   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:35.253910   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.253910   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:35.257543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:35.284715   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.284715   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:35.288228   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:35.317179   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.317179   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:35.320898   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:35.347702   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.347702   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:35.351146   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:35.380831   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.380865   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:35.380865   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:35.380894   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:35.460624   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:35.460624   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:35.460624   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:35.503284   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:35.503284   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:35.556840   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:35.556840   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:35.619567   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:35.619567   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.155257   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:38.180004   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:38.207932   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.207932   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:38.211988   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:38.240313   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.240313   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:38.243787   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:38.271584   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.271584   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:38.275398   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:38.302890   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.302890   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:38.308028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:38.334217   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.334217   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:38.338421   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:38.366179   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.366179   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:38.370864   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:38.399763   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.399763   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:38.399763   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:38.399763   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.427010   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:38.427010   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:38.520678   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:38.520678   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:38.520678   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:38.565076   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:38.565076   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:38.618166   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:38.618166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:41.184770   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:41.209166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:41.236776   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.236853   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:41.240392   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:41.270413   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.270413   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:41.274447   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:41.299898   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.299898   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:41.303698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:41.331395   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.331395   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:41.335559   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:41.360930   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.360930   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:41.364502   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:41.391119   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.391119   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:41.394804   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:41.421862   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.421862   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:41.421862   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:41.421862   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:41.485064   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:41.485064   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:41.515166   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:41.515166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:41.602242   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:41.602283   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:41.602283   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:41.643359   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:41.643359   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:44.196285   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:44.218200   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:44.246503   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.246585   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:44.251156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:44.281646   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.281711   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:44.285404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:44.314582   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.314582   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:44.318424   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:44.345658   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.345658   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:44.349423   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:44.378211   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.378272   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:44.381956   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:44.410544   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.410544   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:44.414620   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:44.445500   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.445500   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:44.445500   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:44.445500   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:44.507872   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:44.507872   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:44.538767   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:44.538767   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:44.622136   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:44.612744   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.613558   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.618132   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.619496   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.620483   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:44.612744   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.613558   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.618132   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.619496   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.620483   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:44.622136   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:44.622136   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:44.663418   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:44.663418   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:47.212335   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:47.235078   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:47.263884   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.263884   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:47.267298   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:47.296349   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.296349   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:47.300145   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:47.328463   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.328463   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:47.332047   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:47.360277   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.360277   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:47.365253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:47.394405   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.394405   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:47.398327   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:47.424342   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.424342   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:47.427553   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:47.457407   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.457407   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:47.457407   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:47.457482   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:47.518376   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:47.518376   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:47.549518   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:47.549518   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:47.633807   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:47.621666   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.623404   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.625321   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.626276   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.627968   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:47.621666   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.623404   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.625321   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.626276   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.627968   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:47.633807   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:47.633807   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:47.677347   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:47.677347   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:50.228661   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:50.251356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:50.280242   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.280242   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:50.284021   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:50.312131   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.312131   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:50.316156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:50.345649   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.345649   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:50.349420   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:50.378641   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.378641   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:50.382647   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:50.412461   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.412461   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:50.416175   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:50.442845   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.442845   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:50.446814   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:50.475928   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.475928   13524 logs.go:284] No container was found matching "kindnet"
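The empty `docker ps` results above are the actual signal: cri-dockerd names every kubelet-created container k8s_<container>_<pod>_<namespace>_..., and `docker ps -a` includes exited containers, so zero matches for every control-plane filter means kubelet never even created the static pods. A one-line sketch that checks all of them at once instead of one filter per component:

	# Any kubelet-managed container at all? cri-dockerd prefixes them with k8s_.
	docker ps -a --filter name=k8s_ --format '{{.ID}}\t{{.Names}}\t{{.Status}}'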
	I1216 05:03:50.475928   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:50.475928   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:50.557550   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:50.546013   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.546957   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.548058   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550001   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550942   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:03:50.557550   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:50.557550   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:50.598249   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:50.599249   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:50.649236   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:50.649236   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:50.708474   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:50.708474   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
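The four log sources polled above (kubelet and Docker/cri-docker journals, container status, kernel ring buffer) can be pulled by hand with the same commands; kubelet's journal is usually the one that explains why the static pods never appeared. A sketch, run from a shell inside the node:

	# kubelet startup errors are the first place to look.
	sudo journalctl -u kubelet -n 400 --no-pager
	# Container runtime side: docker plus the cri-docker shim.
	sudo journalctl -u docker -u cri-docker -n 400 --no-pager
	# Kernel-level problems (OOM kills, cgroup errors) surface here.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400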
	I1216 05:03:53.243724   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:53.265421   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:53.296102   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.296102   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:53.299979   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:53.326976   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.326976   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:53.330578   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:53.359456   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.359456   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:53.363072   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:53.390071   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.390071   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:53.393691   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:53.420871   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.420871   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:53.424512   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:53.453800   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.453800   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:53.457145   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:53.484517   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.484517   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:53.484517   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:53.484517   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:53.528040   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:53.528040   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:53.587553   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:53.587553   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:53.617548   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:53.617548   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:53.700026   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:53.688408   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.689532   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.690618   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692085   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692939   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:03:53.700026   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:53.700026   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:56.246963   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:56.268638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:56.299094   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.299094   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:56.302639   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:56.332517   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.332560   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:56.336308   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:56.365426   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.365426   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:56.369138   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:56.397544   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.397619   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:56.401112   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:56.429549   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.429549   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:56.433429   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:56.460742   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.460742   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:56.464610   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:56.491304   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.491304   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:56.491304   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:56.491304   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:56.537801   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:56.537801   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:56.596883   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:56.596883   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:56.627551   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:56.627551   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:56.716773   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:56.704143   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.705738   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.709298   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.710507   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.711360   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:03:56.716773   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:56.716773   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:59.265591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:59.287053   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:59.314567   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.314567   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:59.318471   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:59.344778   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.344778   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:59.348198   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:59.377352   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.377352   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:59.381355   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:59.409757   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.409757   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:59.413264   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:59.442030   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.442030   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:59.447566   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:59.476800   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.476800   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:59.480486   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:59.510562   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.510562   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:59.510562   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:59.510562   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:59.594557   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:59.583933   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.585461   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.586898   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.588167   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.589054   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:03:59.594557   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:59.594557   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:59.635862   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:59.635862   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:59.680837   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:59.680837   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:59.742598   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:59.742598   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:02.276919   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:02.299620   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:02.328580   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.328580   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:02.332001   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:02.362532   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.362532   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:02.367709   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:02.398639   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.398639   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:02.402478   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:02.429515   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.429515   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:02.434024   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:02.462711   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.462771   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:02.465977   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:02.496760   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.496760   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:02.500343   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:02.528038   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.528082   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:02.528082   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:02.528117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:02.591712   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:02.591712   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:02.621318   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:02.621318   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:02.725138   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:02.714257   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.715298   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.717709   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.718396   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.720971   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:02.725138   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:02.725138   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:02.765954   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:02.765954   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:05.326035   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:05.347411   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:05.372745   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.372745   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:05.376358   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:05.403930   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.403930   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:05.406957   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:05.437512   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.437512   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:05.441038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:05.468927   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.468973   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:05.472507   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:05.499239   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.499239   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:05.503303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:05.529451   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.529512   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:05.533654   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:05.561652   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.561652   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:05.561652   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:05.561652   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:05.604232   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:05.604232   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:05.656685   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:05.656714   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:05.718388   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:05.718388   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:05.748808   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:05.748808   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:05.832901   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:05.823709   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.825763   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.826966   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.828014   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.829256   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:08.338915   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:08.361157   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:08.392451   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.392451   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:08.396684   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:08.423351   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.423351   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:08.429970   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:08.457365   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.457365   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:08.460969   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:08.489550   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.489550   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:08.492908   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:08.522740   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.522740   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:08.526558   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:08.555230   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.555230   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:08.558834   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:08.588132   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.588132   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:08.588132   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:08.588132   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:08.648570   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:08.648570   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:08.679084   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:08.679117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:08.767825   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:08.758330   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.759809   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.761174   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763021   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763841   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:08.767825   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:08.767825   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:08.813493   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:08.813493   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:11.371323   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:11.393671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:11.423912   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.423912   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:11.426874   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:11.457321   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.457321   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:11.460999   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:11.491719   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.491742   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:11.495112   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:11.524188   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.524188   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:11.530312   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:11.558213   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.558213   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:11.562148   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:11.587695   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.587695   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:11.591166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:11.618568   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.618568   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:11.618568   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:11.618568   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:11.700342   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:11.691304   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.692191   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.695157   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.697226   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.698318   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:11.700342   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:11.700342   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:11.741856   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:11.741856   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:11.788648   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:11.788648   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:11.849193   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:11.849193   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:14.383220   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:14.404569   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:14.434777   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.434777   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:14.438799   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:14.466806   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.466806   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:14.470274   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:14.496413   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.496413   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:14.500050   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:14.531727   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.531727   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:14.535294   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:14.563393   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.563393   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:14.567315   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:14.592541   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.592541   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:14.596104   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:14.628287   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.628287   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:14.628287   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:14.628287   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:14.692122   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:14.692122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:14.720935   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:14.720935   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:14.809952   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:14.799452   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.800203   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.804585   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.805531   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.807780   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:14.809952   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:14.809952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:14.853842   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:14.853842   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:17.408509   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:17.431899   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:17.459863   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.459863   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:17.463546   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:17.489686   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.489686   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:17.493208   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:17.521484   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.521484   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:17.525013   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:17.552847   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.552847   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:17.556723   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:17.583677   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.583677   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:17.587267   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:17.613916   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.613916   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:17.617383   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:17.649827   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.649827   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:17.649827   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:17.649827   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:17.697170   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:17.697170   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:17.754919   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:17.754919   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:17.784122   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:17.784122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:17.864432   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:17.854159   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.855168   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.856276   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.857030   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.859249   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:17.864463   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:17.864463   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
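The cycle repeats on a fixed cadence (05:03:47, 05:03:50, 05:03:53, ... above): minikube re-probes for a running apiserver roughly every three seconds until its wait deadline expires, gathering the same diagnostics on each pass. Stripped of the log gathering, the loop amounts to something like this sketch (the retry count and sleep are illustrative, not the tool's actual constants):

	# Poll until kube-apiserver shows up or we give up; values are illustrative.
	for attempt in $(seq 1 40); do
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "apiserver is running"; break
	  fi
	  sleep 3
	done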
	I1216 05:04:20.414214   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:20.438174   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:20.468253   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.468253   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:20.471621   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:20.500056   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.500056   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:20.503669   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:20.535901   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.535901   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:20.539210   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:20.566366   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.566366   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:20.570012   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:20.599351   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.599351   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:20.603383   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:20.629474   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.629474   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:20.636460   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:20.662795   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.662795   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:20.662795   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:20.662795   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:20.723615   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:20.723615   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:20.752636   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:20.752636   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:20.837861   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:20.826007   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.827210   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.829865   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.831983   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.832937   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:20.826007   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.827210   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.829865   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.831983   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.832937   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:20.837861   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:20.837861   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:20.879492   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:20.879492   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
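From here the same gather cycle repeats on roughly a three-second tick: probe for a kube-apiserver process, list each expected control-plane container by name, find none, then re-collect kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of the probe half of that loop, using the exact commands from the log (the loop shape and the 3s interval are inferred from the timestamps, not taken from minikube's source):

	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  for c in kube-apiserver etcd coredns kube-scheduler \
	           kube-proxy kube-controller-manager kindnet; do
	    docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'
	  done
	  sleep 3   # approximate cadence seen in the log
	done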
	I1216 05:04:23.436591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:23.459603   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:23.484610   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.485910   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:23.489800   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:23.516517   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.516517   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:23.520034   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:23.549815   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.549815   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:23.553056   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:23.583026   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.583026   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:23.586920   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:23.615403   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.615403   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:23.618776   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:23.647271   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.647271   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:23.650983   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:23.677461   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.677520   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:23.677520   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:23.677559   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:23.743913   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:23.743913   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:23.773462   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:23.773462   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:23.862441   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:23.853159   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.854280   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.855338   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.856368   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.857459   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:23.853159   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.854280   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.855338   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.856368   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.857459   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:23.862502   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:23.862526   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:23.903963   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:23.903963   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:26.456802   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:26.479694   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:26.507859   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.507859   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:26.511781   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:26.537683   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.537683   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:26.541445   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:26.569611   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.569611   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:26.573478   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:26.604349   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.604377   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:26.609300   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:26.638784   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.638784   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:26.641986   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:26.669720   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.669720   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:26.673932   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:26.700387   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.700387   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:26.700387   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:26.700387   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:26.766000   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:26.766000   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:26.796095   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:26.796095   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:26.882695   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:26.871861   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.872835   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.874610   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.876128   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.877304   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:26.871861   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.872835   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.874610   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.876128   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.877304   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:26.882695   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:26.882695   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:26.924768   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:26.924768   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:29.478546   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:29.499904   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:29.527110   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.527110   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:29.531186   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:29.558221   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.558221   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:29.561810   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:29.591838   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.591838   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:29.596165   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:29.623642   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.623642   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:29.627192   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:29.652493   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.652526   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:29.655375   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:29.682914   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.682957   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:29.686351   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:29.714123   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.714123   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:29.714123   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:29.714123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:29.774899   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:29.774899   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:29.802342   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:29.802342   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:29.885111   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:29.875757   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.876923   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.877963   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.879017   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.880228   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:29.875757   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.876923   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.877963   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.879017   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.880228   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:29.885242   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:29.885242   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:29.926184   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:29.926184   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:32.480583   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:32.502826   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:32.533439   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.533463   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:32.537047   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:32.564845   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.564845   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:32.568203   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:32.595465   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.595526   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:32.598404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:32.626657   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.626657   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:32.630597   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:32.656354   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.656354   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:32.660989   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:32.690899   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.690920   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:32.693919   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:32.721353   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.721353   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:32.721353   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:32.721353   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:32.783967   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:32.783967   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:32.813914   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:32.813914   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:32.893277   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:32.884279   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.884964   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.887572   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.888527   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.889778   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:32.884279   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.884964   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.887572   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.888527   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.889778   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:32.893277   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:32.893277   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:32.936887   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:32.936887   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:35.508248   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:35.532690   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:35.562568   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.562568   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:35.566845   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:35.593817   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.593817   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:35.597629   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:35.626272   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.626272   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:35.629313   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:35.660523   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.660523   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:35.664731   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:35.696512   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.696512   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:35.699886   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:35.730008   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.730008   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:35.733873   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:35.759351   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.759351   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:35.760366   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:35.760366   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:35.805169   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:35.805169   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:35.871943   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:35.871943   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:35.902094   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:35.902094   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:35.984144   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:35.975517   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.976548   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.977611   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.978767   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.980051   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:35.975517   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.976548   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.977611   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.978767   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.980051   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:35.984671   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:35.984671   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:38.532401   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:38.553975   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:38.587094   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.587163   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:38.590542   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:38.615078   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.615078   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:38.620176   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:38.646601   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.646601   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:38.649820   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:38.678850   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.678850   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:38.681929   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:38.708321   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.708380   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:38.711681   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:38.740769   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.740859   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:38.744600   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:38.773706   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.773706   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:38.773706   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:38.773706   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:38.802001   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:38.802997   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:38.884848   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:38.877013   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.878352   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.879473   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.880593   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.881944   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:38.877013   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.878352   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.879473   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.880593   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.881944   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:38.884848   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:38.884848   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:38.927525   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:38.927525   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:38.973952   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:38.973952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:41.541093   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:41.564290   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:41.592889   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.592889   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:41.597074   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:41.626087   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.626087   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:41.630076   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:41.656581   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.656581   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:41.660739   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:41.689073   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.689073   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:41.692998   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:41.718767   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.718767   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:41.722605   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:41.750884   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.750884   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:41.754652   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:41.780815   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.780815   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:41.780815   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:41.780815   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:41.872864   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:41.862126   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.863102   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.867559   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.868025   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.869518   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:41.862126   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.863102   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.867559   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.868025   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.869518   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:41.872864   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:41.872864   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:41.911229   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:41.911229   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:41.958721   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:41.958721   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:42.017563   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:42.017563   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:44.553294   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:44.576740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:44.607009   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.607009   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:44.610623   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:44.635971   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.635971   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:44.639338   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:44.664675   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.664675   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:44.667916   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:44.696295   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.696329   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:44.700356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:44.727661   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.727661   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:44.731273   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:44.759144   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.759174   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:44.762982   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:44.790033   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.790033   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:44.790080   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:44.790080   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:44.817221   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:44.817221   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:44.896592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:44.887275   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.888226   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.890805   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.892527   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.894299   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:44.887275   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.888226   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.890805   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.892527   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.894299   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:44.896592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:44.896592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:44.940361   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:44.940361   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:44.989348   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:44.989348   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:47.553461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:47.576347   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:47.606540   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.606602   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:47.610221   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:47.637575   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.637634   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:47.640884   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:47.669743   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.669743   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:47.673137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:47.702380   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.702380   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:47.706154   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:47.732891   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.732891   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:47.736068   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:47.765439   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.765464   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:47.769425   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:47.799223   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.799223   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:47.799223   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:47.799223   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:47.845720   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:47.846247   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:47.903222   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:47.903222   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:47.932986   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:47.933995   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:48.016069   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:48.005024   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.005860   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.008285   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.009577   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.010646   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:48.005024   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.005860   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.008285   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.009577   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.010646   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:48.016069   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:48.016069   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
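
The block above repeats as one unit roughly every three seconds: minikube probes for a kube-apiserver process, checks each control-plane container by name, and only then falls back to gathering logs. A minimal Go sketch of that probe follows, assuming plain exec.Command in place of minikube's ssh_runner over SSH and an interval inferred from the timestamps; it is an illustration, not minikube's actual implementation.

package main

// Minimal sketch of the probe loop visible in the log, not minikube's code.
// Assumptions: local exec.Command instead of ssh_runner, and a 3-second
// interval inferred from the log timestamps.

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs mirrors the `docker ps -a --filter=name=k8s_... --format={{.ID}}`
// calls above, returning matching container IDs.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for attempt := 0; attempt < 10; attempt++ {
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println("docker ps failed:", err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // interval inferred from the timestamps
	}
}

Against the node in this run, every attempt would print the same `No container was found matching "kube-apiserver"` line seen throughout the log.
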
	I1216 05:04:50.561698   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:50.585162   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:50.615237   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.615237   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:50.618917   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:50.647113   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.647141   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:50.650625   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:50.677020   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.677020   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:50.680813   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:50.708471   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.708495   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:50.712156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:50.739340   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.739340   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:50.744296   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:50.773916   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.773916   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:50.778432   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:50.806364   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.806443   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:50.806443   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:50.806443   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:50.833814   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:50.833814   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:50.931229   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:50.917758   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.919179   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.923691   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.924605   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.925814   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:50.917758   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.919179   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.923691   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.924605   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.925814   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:50.931285   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:50.931285   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:50.973466   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:50.973466   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:51.020564   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:51.020564   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:53.590321   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:53.613378   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:53.645084   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.645084   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:53.648887   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:53.675145   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.675145   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:53.678830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:53.704801   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.704801   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:53.708956   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:53.735945   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.736019   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:53.740579   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:53.766771   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.766771   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:53.771626   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:53.799949   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.799949   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:53.804011   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:53.831885   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.831885   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:53.831944   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:53.831944   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:53.878883   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:53.878883   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:53.941915   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:53.941915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:53.971778   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:53.971778   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:54.047386   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:54.036815   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038092   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038978   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.040350   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.041669   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:54.036815   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038092   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038978   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.040350   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.041669   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:54.047386   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:54.047386   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
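
Every kubectl attempt in these iterations fails identically: the client dials the apiserver at localhost:8441 (the port this profile's kubeconfig points at, per the errors above) and gets connection refused, which is consistent with the empty k8s_kube-apiserver container list. A hedged sketch of checking just that symptom from inside the node; this is an illustration, not part of the test suite.

package main

// Illustration only: verify the exact symptom kubectl keeps reporting by
// dialing the apiserver port directly. localhost:8441 is taken from the
// errors above; everything else here is an assumption.

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// On this node we would expect: connect: connection refused
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}

A refused dial, as here, means nothing is listening at all, which distinguishes a missing apiserver container from a slow or unhealthy one.
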
	I1216 05:04:56.597206   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:56.623446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:56.654753   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.654783   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:56.657638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:56.687889   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.687889   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:56.691181   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:56.718606   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.718677   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:56.722343   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:56.748289   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.748289   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:56.752614   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:56.782030   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.782030   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:56.785674   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:56.813229   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.813229   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:56.817199   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:56.848354   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.848354   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:56.848354   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:56.848354   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:56.920172   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:56.920172   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:56.950025   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:56.950025   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:57.027703   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:57.017393   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.018120   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.020276   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.021295   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.022786   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:57.017393   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.018120   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.020276   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.021295   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.022786   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:57.027703   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:57.027703   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:57.067904   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:57.067904   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:59.623468   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:59.644700   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:59.675762   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.675762   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:59.679255   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:59.710350   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.710350   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:59.714080   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:59.743398   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.743398   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:59.747303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:59.777836   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.777836   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:59.781321   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:59.806990   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.806990   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:59.811081   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:59.839112   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.839112   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:59.842923   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:59.870519   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.870519   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:59.870519   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:59.870519   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:59.931436   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:59.931436   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:59.961074   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:59.961074   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:00.046620   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:00.036147   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.037355   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.038578   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.039491   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.042183   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:00.036147   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.037355   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.038578   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.039491   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.042183   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:00.046620   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:00.046620   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:00.087812   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:00.087812   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:02.639801   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:02.661744   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:02.693879   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.693879   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:02.697168   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:02.724574   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.724623   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:02.728234   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:02.756463   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.756463   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:02.760215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:02.785297   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.785297   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:02.789630   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:02.815967   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.815967   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:02.820071   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:02.846212   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.846212   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:02.849605   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:02.880460   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.880501   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:02.880501   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:02.880501   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:02.942651   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:02.942651   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:02.973117   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:02.973117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:03.055647   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:03.045630   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.046516   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.048690   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.049939   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.051104   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:03.045630   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.046516   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.048690   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.049939   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.051104   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:03.055647   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:03.055647   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:03.097391   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:03.097391   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:05.655285   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:05.681408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:05.711017   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.711017   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:05.714391   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:05.744313   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.744382   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:05.748472   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:05.778641   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.778641   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:05.782574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:05.808201   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.808201   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:05.811215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:05.845094   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.845094   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:05.849400   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:05.889250   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.889250   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:05.892728   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:05.921657   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.921657   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:05.921657   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:05.921657   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:05.983252   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:05.983252   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:06.013531   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:06.013531   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:06.094324   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:06.085481   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.087264   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.088438   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.089540   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.090612   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:06.085481   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.087264   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.088438   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.089540   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.090612   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:06.094324   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:06.094324   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:06.136404   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:06.136404   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
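
Each iteration fans log collection out over the same sources: kubelet, dmesg, describe nodes, Docker, and container status, in an order that varies between iterations. Below is a sketch of the four shell-based sources (describe nodes goes through kubectl and is sketched separately further down), assuming a local /bin/bash stands in for minikube's ssh_runner; the command strings are copied from the log.

package main

// Sketch of the per-iteration log fan-out, assuming local /bin/bash in place
// of minikube's ssh_runner. Command strings are copied from the log; Go's map
// iteration is unordered, which happens to match the varying source order
// across iterations above.

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}
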
	I1216 05:05:08.693146   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:08.716116   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:08.744861   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.744861   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:08.748618   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:08.778582   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.778582   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:08.782132   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:08.810955   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.810955   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:08.814794   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:08.844554   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.844554   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:08.848903   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:08.875472   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.875472   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:08.879360   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:08.907445   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.907445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:08.911290   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:08.937114   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.937114   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:08.937114   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:08.937114   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:08.999016   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:08.999016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:09.029260   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:09.029260   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:09.117123   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:09.107890   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.109150   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.110216   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.111522   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.112791   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:09.107890   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.109150   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.110216   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.111522   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.112791   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:09.117123   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:09.117123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:09.158878   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:09.158878   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:11.716383   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:11.739574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:11.772194   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.772194   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:11.776083   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:11.808831   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.808831   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:11.814900   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:11.843123   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.843123   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:11.847084   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:11.877406   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.877406   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:11.883404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:11.909497   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.909497   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:11.915877   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:11.941644   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.941644   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:11.947889   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:11.975058   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.975058   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:11.975058   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:11.975058   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:12.037229   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:12.037229   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:12.066794   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:12.066794   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:12.145714   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:12.137677   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.138809   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.139798   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.141019   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.142446   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:12.137677   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.138809   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.139798   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.141019   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.142446   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:12.145714   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:12.145752   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:12.189122   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:12.189122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:14.741253   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:14.764365   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:14.795995   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.795995   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:14.799654   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:14.827360   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.827360   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:14.830473   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:14.877262   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.877262   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:14.881028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:14.907013   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.907013   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:14.910966   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:14.940012   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.940012   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:14.943533   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:14.973219   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.973219   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:14.977027   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:15.005016   13524 logs.go:282] 0 containers: []
	W1216 05:05:15.005016   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:15.005016   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:15.005016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:15.068144   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:15.068144   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:15.097979   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:15.097979   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:15.178592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:15.170495   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.171184   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.173358   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.174428   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.175575   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:15.170495   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.171184   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.173358   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.174428   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.175575   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:15.178592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:15.178592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:15.226390   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:15.226390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:17.780482   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:17.801597   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:17.829508   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.829533   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:17.833177   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:17.859642   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.859642   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:17.862985   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:17.890800   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.890800   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:17.893950   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:17.924358   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.924358   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:17.927717   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:17.953300   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.953300   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:17.957301   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:17.985802   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.985802   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:17.989495   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:18.016952   13524 logs.go:282] 0 containers: []
	W1216 05:05:18.016952   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:18.016952   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:18.016952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:18.106203   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:18.093536   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.094540   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.097011   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.098056   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.099323   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:18.093536   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.094540   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.097011   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.098056   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.099323   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:18.106203   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:18.106203   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:18.149655   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:18.149655   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:18.195681   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:18.195707   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:18.257349   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:18.257349   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:20.791461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:20.812868   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:20.842707   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.842740   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:20.846536   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:20.875894   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.875894   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:20.879319   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:20.909010   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.909010   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:20.912866   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:20.941362   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.941362   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:20.945334   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:20.973226   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.973226   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:20.977453   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:21.004793   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.004793   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:21.008493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:21.034240   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.034240   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:21.034240   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:21.034240   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:21.098331   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:21.098331   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:21.129173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:21.129173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:21.218614   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:21.206034   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.207338   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.209505   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.211860   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.213420   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:21.206034   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.207338   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.209505   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.211860   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.213420   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:21.218614   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:21.218614   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:21.261020   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:21.261020   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:47.944179   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:47.965345   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:47.994755   13524 logs.go:282] 0 containers: []
	W1216 05:05:47.994755   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:47.997830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:48.025155   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.025155   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:48.028458   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:48.056617   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.056617   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:48.060320   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:48.089066   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.089066   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:48.092698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:48.121598   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.121628   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:48.125680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:48.157191   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.157191   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:48.160973   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:48.188668   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.188668   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:48.188668   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:48.188668   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:48.244524   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:48.244524   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:48.275889   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:48.275889   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:48.367425   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:48.355136   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.356146   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.358362   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.360588   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.361743   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
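	The pattern above repeats for the rest of this test: no kube-apiserver container ever comes up, so every kubectl call against https://localhost:8441 is refused. The same state can be confirmed by hand inside the node with the commands the log is already running (a minimal bash sketch; the profile name is not shown in this excerpt, so `minikube ssh -p <profile>` is left generic):
	
	    # inside the node, e.g. via `minikube ssh -p <profile>`
	    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'  # empty output = no apiserver container
	    sudo crictl ps -a || sudo docker ps -a                            # overall container status, as the log does
	    sudo journalctl -u kubelet -n 400 | tail -n 40                    # why kubelet is not starting static pods
	    curl -sk https://localhost:8441/healthz || echo "apiserver unreachable on 8441"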
	I1216 05:05:48.367425   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:48.367425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:48.406776   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:48.406776   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:50.963363   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:50.986681   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:51.017484   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.017484   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:51.021749   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:51.049184   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.049184   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:51.052784   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:51.083798   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.083798   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:51.087092   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:51.116150   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.116181   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:51.119540   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:51.148592   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.148592   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:51.152543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:51.182496   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.182496   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:51.186206   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:51.212397   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.212397   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:51.212397   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:51.212397   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:51.294464   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:51.283439   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.284417   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.286178   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.287320   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.289084   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:51.294464   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:51.294464   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:51.336829   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:51.336829   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:51.385258   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:51.385258   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:51.444652   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:51.444652   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:53.980590   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:54.001769   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:54.030775   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.030775   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:54.034817   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:54.062359   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.062385   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:54.065740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:54.093857   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.093857   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:54.097137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:54.127972   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.127972   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:54.131415   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:54.158859   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.158859   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:54.162622   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:54.192077   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.192077   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:54.195448   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:54.223226   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.223226   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:54.223226   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:54.223226   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:54.267495   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:54.268494   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:54.318458   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:54.318458   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:54.379319   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:54.379319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:54.409390   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:54.409390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:54.497343   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:54.486388   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.487502   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.488610   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.489914   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.490890   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
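	Note that kubectl is dialing localhost:8441 rather than minikube's default apiserver port 8443. The port comes from /var/lib/minikube/kubeconfig, which the describe-nodes command above passes explicitly, and it is consistent with the functional tests starting this profile on a custom apiserver port. To verify which endpoint the in-node kubeconfig actually points at (a small sketch using only paths already shown in this log):
	
	    sudo grep 'server:' /var/lib/minikube/kubeconfig   # expected here: server: https://localhost:8441
	    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"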
	I1216 05:05:57.001942   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:57.024505   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:57.051420   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.051420   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:57.055095   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:57.086650   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.086650   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:57.090451   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:57.116570   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.116570   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:57.119823   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:57.150064   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.150064   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:57.154328   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:57.180973   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.180973   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:57.185282   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:57.216597   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.216597   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:57.220216   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:57.246877   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.246877   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:57.246945   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:57.246945   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:57.308963   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:57.308963   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:57.340818   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:57.340818   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:57.440976   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:57.429668   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.430817   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.432070   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.433114   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.434207   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:57.440976   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:57.440976   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:57.485863   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:57.485863   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:00.038815   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:00.060757   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:00.089849   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.089849   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:00.093819   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:00.121426   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.121426   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:00.127493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:00.155063   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.155063   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:00.158469   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:00.186269   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.186269   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:00.191767   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:00.220680   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.220680   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:00.224397   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:00.251492   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.251492   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:00.255561   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:00.282084   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.282084   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:00.282084   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:00.282084   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:00.340687   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:00.340687   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:00.369302   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:00.369302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:00.450456   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:00.439681   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.441111   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.443533   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.444882   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.446042   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:00.450456   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:00.450456   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:00.494633   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:00.494633   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:03.047228   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:03.070414   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:03.100869   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.100869   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:03.106543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:03.133873   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.133873   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:03.137304   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:03.169605   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.169605   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:03.173548   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:03.203086   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.203086   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:03.206980   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:03.233903   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.233903   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:03.239541   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:03.269916   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.269940   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:03.273671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:03.301055   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.301055   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:03.301055   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:03.301055   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:03.361314   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:03.361314   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:03.391207   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:03.391207   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:03.477457   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:03.467080   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.468297   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.470723   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.472023   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.473419   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:03.477457   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:03.477457   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:03.517504   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:03.517504   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:06.085750   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:06.108609   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:06.136944   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.136944   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:06.141119   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:06.168680   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.168680   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:06.172752   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:06.201039   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.201039   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:06.204417   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:06.234173   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.234173   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:06.237313   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:06.268910   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.268910   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:06.272680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:06.302995   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.303025   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:06.306434   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:06.343040   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.343040   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:06.343040   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:06.343040   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:06.404754   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:06.404754   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:06.438236   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:06.438236   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:06.533746   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:06.523818   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.524791   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.526159   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.527425   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.528623   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:06.533746   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:06.533746   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:06.587048   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:06.587048   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:09.143712   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:09.167180   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:09.197847   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.197847   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:09.201143   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:09.231047   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.231047   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:09.234772   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:09.263936   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.263936   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:09.267839   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:09.293408   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.293408   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:09.297079   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:09.325926   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.325926   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:09.329675   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:09.354839   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.354839   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:09.358679   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:09.386294   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.386294   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:09.386294   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:09.386294   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:09.446046   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:09.446046   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:09.474123   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:09.474123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:09.570430   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:09.552344   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.553464   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.562467   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.564909   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.565822   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:09.570430   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:09.570430   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:09.612996   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:09.612996   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:12.162991   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:12.185413   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:12.220706   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.220706   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:12.224471   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:12.252012   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.252085   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:12.255507   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:12.287146   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.287146   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:12.291350   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:12.322209   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.322209   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:12.326285   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:12.352463   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.352463   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:12.356344   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:12.384416   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.384445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:12.388099   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:12.416249   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.416249   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:12.416249   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:12.416249   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:12.457279   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:12.457279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:12.504035   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:12.504035   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:12.565073   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:12.565073   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:12.594834   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:12.594834   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:12.671197   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:12.662068   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.663058   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.664278   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.666376   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.667861   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
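	The container-status command in each cycle uses a small fallback idiom: `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a` resolves crictl's full path if it is installed, otherwise substitutes the bare name (which then fails under sudo), and on any failure falls through to plain `docker ps -a`. Unrolled, with the same semantics preserved (a sketch, not minikube's code):
	
	    # try crictl (by full path if found); on any failure fall back to the Docker CLI
	    if ! sudo "$(which crictl || echo crictl)" ps -a; then
	        sudo docker ps -a
	    fi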
	I1216 05:06:15.176441   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:15.198949   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:15.228375   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.228375   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:15.232284   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:15.260859   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.260859   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:15.264596   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:15.289482   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.289482   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:15.293332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:15.321841   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.321889   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:15.325366   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:15.355205   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.355205   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:15.359602   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:15.391155   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.391155   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:15.395288   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:15.422696   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.422696   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:15.422696   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:15.422696   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:15.509885   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:15.501731   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.502732   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.503898   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.505461   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.506268   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:15.509885   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:15.509885   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:15.550722   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:15.550722   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:15.597215   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:15.598218   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:15.655170   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:15.655170   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:18.189600   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:18.214190   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:18.244833   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.244918   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:18.248323   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:18.274826   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.274826   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:18.278263   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:18.305755   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.305755   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:18.310038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:18.339762   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.339762   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:18.343253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:18.372235   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.372235   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:18.376253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:18.405785   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.405785   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:18.410335   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:18.436279   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.436279   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:18.436279   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:18.436279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:18.477830   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:18.477830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:18.533284   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:18.533302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:18.592952   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:18.592952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:18.623173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:18.623173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:18.706158   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
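Every "describe nodes" pass above fails identically: kubectl on the node cannot reach the apiserver on localhost:8441, so no node state is ever collected. A minimal way to confirm the refused port by hand, as a hypothetical reproduction (<profile> stands in for the cluster profile, which this excerpt does not name):

    minikube -p <profile> ssh -- curl -k --max-time 5 https://localhost:8441/api
    # "connection refused" here matches the memcache.go errors above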
	I1216 05:06:21.211431   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:21.233375   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:21.263996   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.263996   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:21.267857   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:21.296614   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.296614   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:21.300408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:21.327435   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.327435   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:21.331241   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:21.361684   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.361684   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:21.365531   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:21.393896   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.393896   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:21.397371   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:21.427885   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.427885   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:21.431500   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:21.459772   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.459772   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:21.459772   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:21.459772   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:21.522041   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:21.522041   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:21.550901   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:21.550901   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:21.638725   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:21.638725   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:21.638725   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:21.680001   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:21.680001   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:24.235731   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:24.258332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:24.285838   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.285838   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:24.289583   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:24.320077   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.320077   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:24.323958   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:24.351529   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.351529   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:24.355109   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:24.382170   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.382170   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:24.385526   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:24.415016   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.415016   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:24.418742   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:24.446275   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.446275   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:24.449841   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:24.475953   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.475953   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:24.475953   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:24.475953   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:24.537960   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:24.537960   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:24.566319   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:24.566319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:24.648912   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:24.648912   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:24.648912   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:24.689261   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:24.689261   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:27.244212   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:27.265843   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:27.291130   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.291130   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:27.295137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:27.321255   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.321255   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:27.324759   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:27.355906   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.355906   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:27.359611   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:27.386761   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.386761   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:27.390275   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:27.419553   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.419586   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:27.423093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:27.451634   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.451634   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:27.455077   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:27.485799   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.485799   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:27.485799   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:27.485799   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:27.547830   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:27.547830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:27.576915   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:27.576915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:27.661056   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:27.661056   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:27.661056   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:27.700831   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:27.700831   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:30.249035   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:30.271093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:30.299108   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.299188   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:30.302446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:30.332396   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.332482   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:30.338127   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:30.366185   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.366185   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:30.369711   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:30.400279   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.400279   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:30.404337   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:30.432897   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.432897   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:30.437025   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:30.465969   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.465969   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:30.470356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:30.499169   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.499169   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:30.499169   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:30.499169   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:30.557232   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:30.557232   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:30.584956   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:30.584956   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:30.671890   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:30.671890   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:30.671890   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:30.714351   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:30.714351   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:33.262234   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:33.280780   13524 kubeadm.go:602] duration metric: took 4m2.2739333s to restartPrimaryControlPlane
	W1216 05:06:33.280780   13524 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
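The 4m2.27s spent in restartPrimaryControlPlane is the retry loop visible throughout this section: roughly every three seconds minikube pgreps for an apiserver process, re-gathers the kubelet/dmesg/Docker logs, and tries again. A rough shell equivalent of that wait, assuming a shell on the node (the pattern is copied from the log; the 80-iteration bound is an assumption so the sketch cannot loop forever):

    for i in $(seq 1 80); do
        sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break   # found a live apiserver
        sleep 3
    done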
	I1216 05:06:33.285614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:06:33.738970   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:33.760826   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:33.774044   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:33.778124   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:33.790578   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:33.790578   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:33.794570   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:06:33.806138   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:33.810590   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:33.828749   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:06:33.841712   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:33.846141   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:33.862218   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.872779   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:33.877830   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.893064   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:06:33.905212   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:33.909089   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
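The four grep/rm pairs above are the stale-kubeconfig cleanup: any file under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before kubeadm runs (here every grep exits with status 2 because the files are already gone). Sketched for a single file, using only the commands the log itself runs:

    # keep the file only if it points at the expected control-plane endpoint
    sudo grep 'https://control-plane.minikube.internal:8441' /etc/kubernetes/admin.conf \
        || sudo rm -f /etc/kubernetes/admin.conf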
	I1216 05:06:33.925766   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:34.031218   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:06:34.116656   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:06:34.211658   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
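All three preflight warnings trace back to the node running cgroups v1 (kernel 5.15.153.1-microsoft-standard-WSL2). The second warning names its own escape hatch; a hypothetical way to apply it, assuming the kubelet config path kubeadm writes later in this log and that kubelet rereads it on restart:

    # set the 'FailCgroupV1' option the warning refers to (YAML field name assumed)
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet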
	I1216 05:10:35.264797   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:10:35.264797   13524 kubeadm.go:319] 
	I1216 05:10:35.264797   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:10:35.269807   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:35.269807   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:35.269807   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:35.270949   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:35.271576   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:35.272413   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:35.272605   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:35.273278   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:35.273322   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:35.273414   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:35.273503   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:35.273681   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:35.273728   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:35.273769   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:35.273813   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:35.273855   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:35.273913   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:35.274584   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:35.274584   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:35.293047   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:35.293426   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:35.293599   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:35.293913   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:35.294149   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:35.294885   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:35.294982   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:35.295109   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:35.295195   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:35.295363   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:35.295447   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:35.295612   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:35.295735   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:35.295944   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:35.296070   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:35.299081   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:35.299081   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:35.300333   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000864945s
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	W1216 05:10:35.301920   13524 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000864945s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
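The init failure reduces to one fact: kubeadm's wait-control-plane phase polled the kubelet's local health endpoint for the full 4m0s and never got an answer, so no static pods were ever started. The probe and both follow-ups are named in the output itself and can be rerun by hand on the node:

    curl -sSL http://127.0.0.1:10248/healthz   # kubeadm's own health check
    systemctl status kubelet                   # is the service running at all?
    journalctl -xeu kubelet                    # why it exited, if it did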
	
	I1216 05:10:35.307024   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:10:35.771515   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:10:35.789507   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:10:35.793192   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:10:35.806790   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:10:35.806790   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:10:35.811076   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:10:35.824674   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:10:35.830540   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:10:35.849846   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:10:35.864835   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:10:35.868716   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:10:35.884647   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.897559   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:10:35.901847   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.919926   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:10:35.932321   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:10:35.937201   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:10:35.958683   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:10:36.010883   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:36.010883   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:36.157778   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:36.157778   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:36.157778   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:36.158306   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:36.158377   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:36.158462   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:36.158630   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:36.158749   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:36.158829   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:36.158950   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:36.159106   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:36.159725   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:36.159807   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:36.159927   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:36.160002   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:36.160137   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:36.160246   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:36.160629   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:36.161060   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:36.161172   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:36.263883   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:36.285337   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:36.291241   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:36.291368   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:36.291473   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:36.291610   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:36.292292   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:36.292479   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:36.355551   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:36.426990   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:36.485556   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:36.680670   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:36.834763   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:36.835291   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:36.840606   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:36.844374   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:36.844573   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:37.021660   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:37.022023   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:14:36.995901   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000744142s
	I1216 05:14:36.995988   13524 kubeadm.go:319] 
	I1216 05:14:36.996138   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:14:36.996214   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:14:36.996375   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:14:36.996375   13524 kubeadm.go:319] 
	I1216 05:14:36.996441   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 
	I1216 05:14:37.001376   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:14:37.002575   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:14:37.002650   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:14:37.002650   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:14:37.002650   13524 kubeadm.go:319] 
	I1216 05:14:37.003329   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:14:37.003329   13524 kubeadm.go:403] duration metric: took 12m6.0383556s to StartCluster
	I1216 05:14:37.003329   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:14:37.007935   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:14:37.064773   13524 cri.go:89] found id: ""
	I1216 05:14:37.064773   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.064773   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:14:37.064773   13524 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:14:37.069487   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:14:37.111914   13524 cri.go:89] found id: ""
	I1216 05:14:37.111914   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.111914   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:14:37.111914   13524 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:14:37.116663   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:14:37.152644   13524 cri.go:89] found id: ""
	I1216 05:14:37.152667   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.152667   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:14:37.152667   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:14:37.157010   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:14:37.200196   13524 cri.go:89] found id: ""
	I1216 05:14:37.200196   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.200196   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:14:37.200268   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:14:37.204321   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:14:37.243623   13524 cri.go:89] found id: ""
	I1216 05:14:37.243623   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.243623   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:14:37.243623   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:14:37.248366   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:14:37.289277   13524 cri.go:89] found id: ""
	I1216 05:14:37.289277   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.289277   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:14:37.289277   13524 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:14:37.294034   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:14:37.333593   13524 cri.go:89] found id: ""
	I1216 05:14:37.333593   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.333593   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:14:37.333593   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:14:37.333593   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:14:37.417323   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:14:37.417323   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:14:37.417323   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:14:37.457412   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:14:37.457412   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:14:37.504416   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:14:37.504416   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:14:37.564994   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:14:37.564994   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 05:14:37.597706   13524 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.597706   13524 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted above]
	
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.600079   13524 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:14:37.606140   13524 out.go:203] 
	W1216 05:14:37.609999   13524 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted above]
	
	W1216 05:14:37.610044   13524 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 05:14:37.610044   13524 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 05:14:37.613011   13524 out.go:203] 
	
	
	==> Docker <==
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685355275Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685360576Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685379878Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685414282Z" level=info msg="Initializing buildkit"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.786375434Z" level=info msg="Completed buildkit initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.794992212Z" level=info msg="Daemon has completed initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151030Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151330Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795240140Z" level=info msg="API listen on [::]:2376"
	Dec 16 05:02:27 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:27 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:02:28 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Loaded network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 05:02:28 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:14:39.438696   40246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:39.439923   40246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:39.443612   40246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:39.444539   40246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:39.446857   40246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 05:02] CPU: 4 PID: 64252 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001010] RIP: 0033:0x7fd1009fcb20
	[  +0.000612] Code: Unable to access opcode bytes at RIP 0x7fd1009fcaf6.
	[  +0.000841] RSP: 002b:00007ffd77dbf430 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000856] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000836] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000800] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000980] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000812] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.813920] CPU: 4 PID: 64374 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f21a7b6eb20
	[  +0.000430] Code: Unable to access opcode bytes at RIP 0x7f21a7b6eaf6.
	[  +0.000678] RSP: 002b:00007ffd5fd33ec0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000807] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000827] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000787] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000791] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000788] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:14:39 up 50 min,  0 user,  load average: 0.36, 0.32, 0.43
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:14:36 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:14:36 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 05:14:36 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:36 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:37 functional-002200 kubelet[39968]: E1216 05:14:37.011372   39968 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:14:37 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:14:37 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:14:37 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 05:14:37 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:37 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:37 functional-002200 kubelet[40097]: E1216 05:14:37.783684   40097 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:14:37 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:14:37 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:14:38 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 05:14:38 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:38 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:38 functional-002200 kubelet[40125]: E1216 05:14:38.511526   40125 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:14:38 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:14:38 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:14:39 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 16 05:14:39 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:39 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:14:39 functional-002200 kubelet[40187]: E1216 05:14:39.267286   40187 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:14:39 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:14:39 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
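The kubelet journal above shows the direct cause of the failure: kubelet v1.35.0-beta.0 exits during configuration validation because the node (a WSL2 5.15 kernel, per the "==> kernel <==" section) still exposes cgroup v1. A minimal triage sketch for reproducing that diagnosis, assuming shell access to the node (e.g. minikube ssh -p functional-002200); the stat idiom for detecting the cgroup version is an addition, not taken from the log:

    systemctl status kubelet        # confirms the exit-code restart loop seen in the journal
    journalctl -xeu kubelet -n 50   # surfaces the "cgroup v1" validation error
    stat -fc %T /sys/fs/cgroup/     # prints "cgroup2fs" on cgroup v2, "tmpfs" on cgroup v1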
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (576.667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (739.29s)
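Both the SystemVerification warning and the kubelet validation error point at the same knob: the v1.35 'FailCgroupV1' check. A hypothetical workaround sketch, untested in this run: the config path comes from the "[kubelet-start] Writing kubelet configuration to file" lines above, while the camelCase YAML spelling failCgroupV1 is an assumption; per the warning, the kubeadm SystemVerification preflight check would also have to be skipped explicitly.

    # Hypothetical: opt back into cgroup v1 via the kubelet configuration
    # option the warning names, then restart the service. Assumes the key
    # is not already present in the file.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet

The alternative suggested by minikube itself (passing --extra-config=kubelet.cgroup-driver=systemd) targets the cgroup driver rather than the cgroup version, so it may not clear this particular validation on a cgroup v1 host.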

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (53.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-002200 get po -l tier=control-plane -n kube-system -o=json
E1216 05:14:55.677980   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-002200 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (50.3462962s)

** stderr ** 
	E1216 05:14:51.499160   10516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:15:01.588515   10516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:15:11.629509   10516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:15:21.670840   10516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:15:31.712027   10516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-002200 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
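The inspect output pins down the endpoint the kubectl errors in this test were hitting: the container's apiserver port 8441/tcp is published on 127.0.0.1:49316, exactly the address in the EOF errors above. A small host-side cross-check sketch using docker inspect's Go template:

    # Print the host port Docker mapped to the apiserver port 8441/tcp.
    docker inspect functional-002200 \
      --format '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}'
    # => 49316 here; the mapping exists, but connections fail with EOF
    #    because kube-apiserver never came up inside the node.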
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (583.4683ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.2317158s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr                  │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ service │ functional-902700 service hello-node --url                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │                     │
	│ image   │ functional-902700 image ls                                                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format yaml --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format table --alsologtostderr                                                             │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ image   │ functional-902700 image ls --format json --alsologtostderr                                                              │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:40 UTC │ 16 Dec 25 04:40 UTC │
	│ delete  │ -p functional-902700                                                                                                    │ functional-902700 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │ 16 Dec 25 04:45 UTC │
	│ start   │ -p functional-002200 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:45 UTC │                     │
	│ start   │ -p functional-002200 --alsologtostderr -v=8                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:53 UTC │                     │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.1                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:3.3                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add registry.k8s.io/pause:latest                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache add minikube-local-cache-test:functional-002200                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ functional-002200 cache delete minikube-local-cache-test:functional-002200                                              │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl images                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	│ cache   │ functional-002200 cache reload                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ ssh     │ functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │ 16 Dec 25 05:01 UTC │
	│ kubectl │ functional-002200 kubectl -- --context functional-002200 get pods                                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:01 UTC │                     │
	│ start   │ -p functional-002200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:02:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:02:22.143364   13524 out.go:360] Setting OutFile to fd 1016 ...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.184929   13524 out.go:374] Setting ErrFile to fd 816...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.200191   13524 out.go:368] Setting JSON to false
	I1216 05:02:22.202193   13524 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2363,"bootTime":1765858978,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:02:22.202193   13524 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:02:22.207191   13524 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:02:22.209167   13524 notify.go:221] Checking for updates...
	I1216 05:02:22.213806   13524 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:02:22.217226   13524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:02:22.219465   13524 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:02:22.221726   13524 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:02:22.223984   13524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:02:22.226535   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:22.226535   13524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:02:22.342632   13524 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:02:22.345860   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.582056   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.565555373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.589056   13524 out.go:179] * Using the docker driver based on existing profile
	I1216 05:02:22.591055   13524 start.go:309] selected driver: docker
	I1216 05:02:22.591055   13524 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:22.592055   13524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:02:22.597056   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.818036   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.800509482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.866190   13524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:02:22.866190   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:22.866190   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:22.866190   13524 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:22.870532   13524 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 05:02:22.874014   13524 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 05:02:22.876014   13524 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:02:22.880521   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:22.880869   13524 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:02:22.880869   13524 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 05:02:22.880869   13524 cache.go:65] Caching tarball of preloaded images
	I1216 05:02:22.880869   13524 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 05:02:22.881393   13524 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 05:02:22.881584   13524 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 05:02:22.957945   13524 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:02:22.957945   13524 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:02:22.957945   13524 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:02:22.957945   13524 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:02:22.957945   13524 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-002200"
	I1216 05:02:22.957945   13524 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:02:22.957945   13524 fix.go:54] fixHost starting: 
	I1216 05:02:22.964754   13524 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 05:02:23.020643   13524 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 05:02:23.020643   13524 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:02:23.024655   13524 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 05:02:23.024655   13524 machine.go:94] provisionDockerMachine start ...
	I1216 05:02:23.028059   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.089226   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.089720   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.089720   13524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:02:23.263587   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
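	The docker container inspect template above reads the host port mapped to the container's 22/tcp: in Docker's Go templates, .NetworkSettings.Ports is a map keyed by "port/proto" whose values are slices of host bindings, hence the nested index calls. A standalone sketch, using the container name from this run:
	
	  # Prints the host port Docker mapped to the container's sshd (22/tcp).
	  docker container inspect \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	    functional-002200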
	
	I1216 05:02:23.263587   13524 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 05:02:23.269095   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.343706   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.344098   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.344098   13524 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 05:02:23.523871   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 05:02:23.527605   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.582373   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.582799   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.582799   13524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:02:23.744731   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:02:23.744781   13524 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 05:02:23.744810   13524 ubuntu.go:190] setting up certificates
	I1216 05:02:23.744810   13524 provision.go:84] configureAuth start
	I1216 05:02:23.748413   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:23.805299   13524 provision.go:143] copyHostCerts
	I1216 05:02:23.805299   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 05:02:23.805299   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 05:02:23.805870   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 05:02:23.806787   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 05:02:23.806813   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 05:02:23.806957   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 05:02:23.807512   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 05:02:23.807512   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 05:02:23.807512   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 05:02:23.808114   13524 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 05:02:24.024499   13524 provision.go:177] copyRemoteCerts
	I1216 05:02:24.027499   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:02:24.030499   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.084455   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:24.207064   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 05:02:24.231047   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:02:24.253218   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:02:24.278696   13524 provision.go:87] duration metric: took 533.8823ms to configureAuth
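	With configureAuth done, the CA, server cert, and server key copied to /etc/docker let dockerd run with --tlsverify (see the unit written below). A hedged sketch of a client call using the matching client certs from this run's cert store; the host-side mapping for port 2376 is an assumption, not shown in this log, and a POSIX shell is assumed for the line continuations:
	
	  # Talk to the daemon's TLS endpoint with the client certs minikube generated.
	  docker --tlsverify \
	    --tlscacert 'C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem' \
	    --tlscert 'C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem' \
	    --tlskey 'C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem' \
	    -H tcp://127.0.0.1:2376 version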
	I1216 05:02:24.278696   13524 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:02:24.279294   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:24.283136   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.338661   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.338661   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.338661   13524 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 05:02:24.501259   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 05:02:24.501259   13524 ubuntu.go:71] root file system type: overlay
	I1216 05:02:24.503332   13524 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 05:02:24.506757   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.561628   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.562204   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.562204   13524 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 05:02:24.732222   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 05:02:24.736823   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.789603   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.790705   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.790705   13524 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 05:02:24.956843   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:02:24.956843   13524 machine.go:97] duration metric: took 1.9321739s to provisionDockerMachine
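	The one-liner above is an update-only-if-changed idiom: diff -u exits non-zero when the rendered unit differs from the installed one, so the mv/daemon-reload/restart branch runs only when the file actually changed, which keeps repeated provisioning idempotent. The same shape in isolation, with the paths from the log:
	
	  # Replace the unit and bounce the daemon only when the new render differs.
	  sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	    || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	         sudo systemctl daemon-reload && sudo systemctl restart docker; }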
	I1216 05:02:24.956843   13524 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 05:02:24.956843   13524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:02:24.961328   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:02:24.963780   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.018396   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.151694   13524 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:02:25.159738   13524 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:02:25.159738   13524 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 05:02:25.160372   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 05:02:25.161048   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 05:02:25.165137   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 05:02:25.176929   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 05:02:25.202240   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 05:02:25.226560   13524 start.go:296] duration metric: took 269.6889ms for postStartSetup
	I1216 05:02:25.230465   13524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:02:25.232786   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.287361   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.409366   13524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:02:25.419299   13524 fix.go:56] duration metric: took 2.4613371s for fixHost
	I1216 05:02:25.419299   13524 start.go:83] releasing machines lock for "functional-002200", held for 2.4613371s
	I1216 05:02:25.423876   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:25.479590   13524 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 05:02:25.483988   13524 ssh_runner.go:195] Run: cat /version.json
	I1216 05:02:25.483988   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.487582   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.542893   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.550987   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	W1216 05:02:25.660611   13524 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 05:02:25.682804   13524 ssh_runner.go:195] Run: systemctl --version
	I1216 05:02:25.696301   13524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:02:25.703847   13524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:02:25.708899   13524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:02:25.720784   13524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:02:25.720820   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:25.720861   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:25.720884   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:25.746032   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 05:02:25.756672   13524 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 05:02:25.756737   13524 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 05:02:25.764577   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 05:02:25.778652   13524 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 05:02:25.782944   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 05:02:25.802561   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.822362   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 05:02:25.841368   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.860152   13524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:02:25.878804   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 05:02:25.897721   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 05:02:25.916509   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 05:02:25.935848   13524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:02:25.954408   13524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
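	The two commands above cover standard Kubernetes node prerequisites: bridged traffic must traverse iptables, and IPv4 forwarding must be on. The runtime toggles shown do not survive a reboot; a persistent sketch (the drop-in file name is an assumption):
	
	  # Persist the same kernel settings across reboots.
	  printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	    | sudo tee /etc/sysctl.d/99-kubernetes.conf
	  sudo sysctl --system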
	I1216 05:02:25.972671   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.135013   13524 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 05:02:26.286857   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:26.286857   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:26.291710   13524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 05:02:26.313739   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.335410   13524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:02:26.394402   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.416456   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 05:02:26.433425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:26.458250   13524 ssh_runner.go:195] Run: which cri-dockerd
	I1216 05:02:26.469192   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 05:02:26.479991   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 05:02:26.508331   13524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 05:02:26.653923   13524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 05:02:26.807509   13524 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 05:02:26.808040   13524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
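	The 130-byte daemon.json copied above aligns Docker's cgroup driver with the kubelet's ("cgroupfs", per the detection above). Its exact contents are not echoed in this log; a representative sketch of such a file, as an assumption, would be:
	
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  }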
	I1216 05:02:26.830421   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 05:02:26.853437   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.993507   13524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 05:02:27.802449   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:02:27.823963   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 05:02:27.846489   13524 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 05:02:27.872589   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:27.893632   13524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 05:02:28.032388   13524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 05:02:28.173426   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.303647   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 05:02:28.327061   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 05:02:28.347849   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.515228   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 05:02:28.617223   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:28.634479   13524 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 05:02:28.638575   13524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 05:02:28.646251   13524 start.go:564] Will wait 60s for crictl version
	I1216 05:02:28.650257   13524 ssh_runner.go:195] Run: which crictl
	I1216 05:02:28.663129   13524 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:02:28.707678   13524 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
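	crictl picks up its runtime endpoint from the /etc/crictl.yaml written a few steps earlier, which is why the bare crictl version above reaches cri-dockerd. The equivalent explicit form:
	
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version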
	I1216 05:02:28.711140   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.754899   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.798065   13524 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 05:02:28.801328   13524 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 05:02:28.928679   13524 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 05:02:28.933317   13524 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
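	The dig call above resolves host.docker.internal, the name Docker Desktop publishes inside containers for the host machine, and the grep then checks whether that IP is already pinned as host.minikube.internal in /etc/hosts. A standalone sketch, using the container name from this run:
	
	  # Discover the host's IP as seen from inside the minikube container.
	  HOST_IP=$(docker exec -t functional-002200 dig +short host.docker.internal)
	  echo "host ip: ${HOST_IP}"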
	I1216 05:02:28.945787   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:29.006099   13524 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 05:02:29.009213   13524 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:02:29.009213   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:29.012544   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.044964   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.045018   13524 docker.go:621] Images already preloaded, skipping extraction
	I1216 05:02:29.050176   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.078871   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.078871   13524 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:02:29.078871   13524 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 05:02:29.078871   13524 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:02:29.083733   13524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 05:02:29.153386   13524 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 05:02:29.153441   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:29.153441   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:29.153441   13524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:02:29.153497   13524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:02:29.153740   13524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
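
The multi-document config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later fed to kubeadm. A rough, stdlib-only sanity check of such a file, splitting on the "---" separators and confirming each document declares an apiVersion and kind; this is an illustrative check, not kubeadm's own validation:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for i, doc := range strings.Split(string(data), "\n---\n") {
            hasAPI := strings.Contains(doc, "apiVersion:")
            hasKind := strings.Contains(doc, "kind:")
            fmt.Printf("doc %d: apiVersion=%v kind=%v\n", i, hasAPI, hasKind)
        }
    }
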
	
	I1216 05:02:29.159735   13524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:02:29.170652   13524 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:02:29.175184   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:02:29.187845   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 05:02:29.208540   13524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:02:29.226431   13524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1216 05:02:29.250294   13524 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:02:29.261010   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:29.404128   13524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:02:30.007557   13524 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 05:02:30.007557   13524 certs.go:195] generating shared ca certs ...
	I1216 05:02:30.007557   13524 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 05:02:30.008887   13524 certs.go:257] generating profile certs ...
	I1216 05:02:30.013750   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 05:02:30.014952   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 05:02:30.015510   13524 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 05:02:30.017231   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:02:30.047196   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 05:02:30.070848   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:02:30.096702   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:02:30.121970   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:02:30.146884   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:02:30.173170   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:02:30.199629   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:02:30.226778   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 05:02:30.250105   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:02:30.272968   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 05:02:30.298291   13524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:02:30.318635   13524 ssh_runner.go:195] Run: openssl version
	I1216 05:02:30.332668   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.355358   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 05:02:30.372181   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.379909   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.384371   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.432373   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:02:30.447662   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.464870   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:02:30.481196   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.489322   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.492995   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.540388   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:02:30.558567   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.574821   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 05:02:30.592525   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.598815   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.603416   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.650141   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
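
The openssl x509 -hash calls above compute the subject hash that OpenSSL uses to look up CAs, and each ln -fs / test -L pair ensures /etc/ssl/certs/<hash>.0 points at the PEM (b5213941.0 for minikubeCA, for example). A small sketch of the same dance, shelling out the way ssh_runner does, using the paths from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash mirrors "openssl x509 -hash -noout -in <pem>": the short
    // hash OpenSSL uses to name CA symlinks under /etc/ssl/certs.
    func subjectHash(pem string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            panic(err)
        }
        // The CA becomes resolvable once /etc/ssl/certs/<hash>.0 links to it.
        fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
    }
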
	I1216 05:02:30.666001   13524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:02:30.677986   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:02:30.724950   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:02:30.775114   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:02:30.821700   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:02:30.868594   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:02:30.916597   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
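
Each "-checkend 86400" above asks whether the certificate expires within the next 24 hours. The same test in pure Go with crypto/x509, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, matching the semantics of "openssl x509 -checkend".
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
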
	I1216 05:02:30.959171   13524 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:30.963942   13524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:30.994317   13524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:02:31.005043   13524 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:02:31.005043   13524 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:02:31.009827   13524 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:02:31.023534   13524 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.026842   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:31.080676   13524 kubeconfig.go:125] found "functional-002200" server: "https://127.0.0.1:49316"
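
The cli_runner call above reads the host port Docker mapped to the container's 8441/tcp, which is where the "https://127.0.0.1:49316" server address comes from. The same lookup from Go, using the identical inspect template shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Index into NetworkSettings.Ports for 8441/tcp and take the first
        // binding's HostPort, exactly as the logged inspect template does.
        tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-002200").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver reachable at https://127.0.0.1:" + strings.TrimSpace(string(out)))
    }
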
	I1216 05:02:31.087667   13524 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:02:31.101385   13524 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:45:52.574738576 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 05:02:29.239240136 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
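
Drift detection above boils down to "diff -u old new": exit status 0 means the configs match, 1 means drift (reconfigure), and anything else is a real error. A sketch of that decision, assuming the same file paths as the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted mirrors the "sudo diff -u ... kubeadm.yaml ... kubeadm.yaml.new"
    // call above: diff exits 0 when files match and 1 when they differ.
    func configDrifted(oldPath, newPath string) (bool, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
        if err == nil {
            return false, nil // identical, no reconfigure needed
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            fmt.Print(string(out)) // the hunk, as logged above
            return true, nil
        }
        return false, err
    }

    func main() {
        drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
    }
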
	I1216 05:02:31.101385   13524 kubeadm.go:1161] stopping kube-system containers ...
	I1216 05:02:31.105991   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:31.137859   13524 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 05:02:31.162569   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:02:31.173570   13524 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 04:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 04:49 /etc/kubernetes/scheduler.conf
	
	I1216 05:02:31.178070   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:02:31.193447   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:02:31.204464   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.208708   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:02:31.223814   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.236112   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.240050   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.256323   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:02:31.270390   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.274655   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:02:31.291834   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:02:31.309287   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.373785   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.743926   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.973968   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:32.044614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
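
On a restart, the log above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full "kubeadm init". A simplified driver for the same sequence; the real calls also prefix PATH with the minikube binaries directory, which this sketch omits:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Order matters: certs before kubeconfigs, kubelet before the
        // static control-plane pods it will launch.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }
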
	I1216 05:02:32.128503   13524 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:02:32.133080   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (the identical pgrep poll repeats at ~500ms intervals, never matching) ...
	I1216 05:03:31.634081   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:32.132156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:32.161948   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.161948   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:32.165532   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:32.190451   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.190451   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:32.194000   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:32.221132   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.221201   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:32.224735   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:32.251199   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.251265   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:32.254803   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:32.285399   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.285399   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:32.288927   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:32.316407   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.316407   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:32.320399   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:32.348258   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.348330   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:32.348330   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:32.348330   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:32.391508   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:32.391508   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:32.457156   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:32.457156   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:32.517211   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:32.517211   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:32.547816   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:32.547816   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:32.628349   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
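
Every "connection refused" above just means nothing is listening on localhost:8441 yet; kubectl cannot distinguish a dead apiserver from one that has not started. A direct probe of the health endpoint makes that distinction visible; TLS verification is skipped because the probe only cares about reachability:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Probe the endpoint kubectl is failing against above. A dial error
        // (connection refused) means no listener yet; any HTTP status means
        // the apiserver is at least accepting connections.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8441/healthz")
        if err != nil {
            fmt.Println("not up:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver responding:", resp.Status)
    }
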
	I1216 05:03:35.133793   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:35.155411   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:35.187090   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.187090   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:35.190727   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:35.222945   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.223013   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:35.226777   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:35.253910   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.253910   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:35.257543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:35.284715   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.284715   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:35.288228   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:35.317179   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.317179   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:35.320898   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:35.347702   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.347702   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:35.351146   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:35.380831   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.380865   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:35.380865   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:35.380894   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:35.460624   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:35.460624   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:35.460624   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:35.503284   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:35.503284   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:35.556840   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:35.556840   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:35.619567   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:35.619567   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.155257   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:38.180004   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:38.207932   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.207932   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:38.211988   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:38.240313   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.240313   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:38.243787   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:38.271584   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.271584   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:38.275398   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:38.302890   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.302890   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:38.308028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:38.334217   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.334217   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:38.338421   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:38.366179   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.366179   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:38.370864   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:38.399763   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.399763   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:38.399763   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:38.399763   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.427010   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:38.427010   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:38.520678   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:38.520678   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:38.520678   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:38.565076   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:38.565076   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:38.618166   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:38.618166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:41.184770   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:41.209166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:41.236776   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.236853   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:41.240392   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:41.270413   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.270413   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:41.274447   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:41.299898   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.299898   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:41.303698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:41.331395   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.331395   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:41.335559   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:41.360930   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.360930   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:41.364502   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:41.391119   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.391119   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:41.394804   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:41.421862   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.421862   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:41.421862   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:41.421862   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:41.485064   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:41.485064   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:41.515166   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:41.515166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:41.602242   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:41.602283   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:41.602283   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:41.643359   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:41.643359   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:44.196285   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:44.218200   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:44.246503   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.246585   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:44.251156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:44.281646   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.281711   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:44.285404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:44.314582   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.314582   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:44.318424   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:44.345658   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.345658   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:44.349423   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:44.378211   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.378272   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:44.381956   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:44.410544   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.410544   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:44.414620   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:44.445500   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.445500   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:44.445500   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:44.445500   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:44.507872   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:44.507872   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:44.538767   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:44.538767   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:44.622136   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:44.612744   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.613558   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.618132   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.619496   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.620483   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:44.612744   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.613558   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.618132   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.619496   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.620483   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
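
The "container status" gather in the cycle above relies on a shell fallback: use crictl when it is on PATH, otherwise fall back to docker ps. A minimal local sketch of that same one-liner follows; the local exec is an assumption for illustration, since minikube actually runs the command over SSH inside the node container (ssh_runner.go).

// Sketch: re-runs the logged "container status" command locally.
// `which crictl || echo crictl` resolves to crictl when installed; if that
// invocation fails, the `|| sudo docker ps -a` branch is tried instead.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("container status gather failed:", err)
	}
	fmt.Print(string(out))
}
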
	I1216 05:03:44.622136   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:44.622136   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:44.663418   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:44.663418   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:47.212335   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:47.235078   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:47.263884   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.263884   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:47.267298   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:47.296349   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.296349   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:47.300145   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:47.328463   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.328463   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:47.332047   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:47.360277   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.360277   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:47.365253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:47.394405   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.394405   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:47.398327   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:47.424342   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.424342   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:47.427553   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:47.457407   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.457407   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:47.457407   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:47.457482   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:47.518376   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:47.518376   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:47.549518   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:47.549518   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:47.633807   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:47.621666   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.623404   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.625321   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.626276   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.627968   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:47.621666   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.623404   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.625321   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.626276   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.627968   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:47.633807   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:47.633807   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:47.677347   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:47.677347   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:50.228661   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:50.251356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:50.280242   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.280242   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:50.284021   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:50.312131   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.312131   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:50.316156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:50.345649   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.345649   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:50.349420   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:50.378641   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.378641   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:50.382647   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:50.412461   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.412461   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:50.416175   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:50.442845   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.442845   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:50.446814   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:50.475928   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.475928   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:50.475928   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:50.475928   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:50.557550   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:50.546013   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.546957   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.548058   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550001   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550942   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:50.546013   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.546957   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.548058   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550001   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550942   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:50.557550   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:50.557550   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:50.598249   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:50.599249   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:50.649236   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:50.649236   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:50.708474   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:50.708474   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:53.243724   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:53.265421   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:53.296102   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.296102   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:53.299979   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:53.326976   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.326976   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:53.330578   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:53.359456   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.359456   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:53.363072   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:53.390071   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.390071   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:53.393691   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:53.420871   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.420871   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:53.424512   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:53.453800   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.453800   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:53.457145   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:53.484517   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.484517   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:53.484517   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:53.484517   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:53.528040   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:53.528040   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:53.587553   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:53.587553   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:53.617548   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:53.617548   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:53.700026   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:53.688408   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.689532   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.690618   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692085   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692939   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:53.688408   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.689532   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.690618   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692085   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692939   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:53.700026   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:53.700026   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:56.246963   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:56.268638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:56.299094   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.299094   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:56.302639   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:56.332517   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.332560   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:56.336308   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:56.365426   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.365426   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:56.369138   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:56.397544   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.397619   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:56.401112   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:56.429549   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.429549   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:56.433429   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:56.460742   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.460742   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:56.464610   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:56.491304   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.491304   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:56.491304   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:56.491304   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:56.537801   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:56.537801   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:56.596883   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:56.596883   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:56.627551   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:56.627551   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:56.716773   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:56.704143   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.705738   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.709298   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.710507   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.711360   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:56.704143   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.705738   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.709298   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.710507   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.711360   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:56.716773   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:56.716773   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:59.265591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:59.287053   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:59.314567   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.314567   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:59.318471   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:59.344778   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.344778   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:59.348198   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:59.377352   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.377352   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:59.381355   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:59.409757   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.409757   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:59.413264   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:59.442030   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.442030   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:59.447566   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:59.476800   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.476800   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:59.480486   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:59.510562   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.510562   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:59.510562   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:59.510562   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:59.594557   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:59.583933   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.585461   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.586898   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.588167   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.589054   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:59.583933   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.585461   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.586898   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.588167   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.589054   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:59.594557   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:59.594557   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:59.635862   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:59.635862   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:59.680837   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:59.680837   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:59.742598   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:59.742598   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:02.276919   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:02.299620   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:02.328580   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.328580   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:02.332001   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:02.362532   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.362532   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:02.367709   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:02.398639   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.398639   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:02.402478   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:02.429515   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.429515   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:02.434024   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:02.462711   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.462771   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:02.465977   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:02.496760   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.496760   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:02.500343   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:02.528038   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.528082   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:02.528082   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:02.528117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:02.591712   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:02.591712   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:02.621318   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:02.621318   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:02.725138   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:02.714257   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.715298   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.717709   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.718396   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.720971   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:02.714257   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.715298   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.717709   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.718396   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.720971   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:02.725138   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:02.725138   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:02.765954   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:02.765954   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:05.326035   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:05.347411   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:05.372745   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.372745   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:05.376358   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:05.403930   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.403930   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:05.406957   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:05.437512   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.437512   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:05.441038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:05.468927   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.468973   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:05.472507   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:05.499239   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.499239   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:05.503303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:05.529451   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.529512   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:05.533654   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:05.561652   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.561652   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:05.561652   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:05.561652   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:05.604232   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:05.604232   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:05.656685   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:05.656714   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:05.718388   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:05.718388   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:05.748808   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:05.748808   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:05.832901   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:05.823709   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.825763   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.826966   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.828014   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.829256   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:05.823709   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.825763   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.826966   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.828014   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.829256   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
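
The same cycle recurs roughly every three seconds: check for a kube-apiserver process, then list each expected control-plane container by its k8s_ name prefix, and regather logs when nothing matches. A minimal sketch of that polling loop is below; the helper name and local docker invocation are illustrative assumptions, not minikube's actual API.

// Sketch of the ~3 s polling cycle visible in the log above. Each component
// is looked up via the same docker filter the log shows; an empty result
// corresponds to the logged `No container was found matching "<name>"`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// containerIDs is a hypothetical helper mirroring the logged command:
// docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) []string {
	out, _ := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	return strings.Fields(string(out))
}

func main() {
	for {
		for _, c := range components {
			if len(containerIDs(c)) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
		time.Sleep(3 * time.Second) // matches the cadence in the log above
	}
}
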
	I1216 05:04:08.338915   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:08.361157   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:08.392451   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.392451   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:08.396684   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:08.423351   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.423351   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:08.429970   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:08.457365   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.457365   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:08.460969   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:08.489550   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.489550   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:08.492908   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:08.522740   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.522740   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:08.526558   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:08.555230   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.555230   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:08.558834   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:08.588132   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.588132   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:08.588132   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:08.588132   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:08.648570   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:08.648570   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:08.679084   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:08.679117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:08.767825   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:08.758330   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.759809   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.761174   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763021   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763841   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:08.758330   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.759809   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.761174   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763021   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763841   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:08.767825   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:08.767825   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:08.813493   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:08.813493   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:11.371323   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:11.393671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:11.423912   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.423912   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:11.426874   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:11.457321   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.457321   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:11.460999   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:11.491719   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.491742   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:11.495112   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:11.524188   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.524188   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:11.530312   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:11.558213   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.558213   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:11.562148   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:11.587695   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.587695   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:11.591166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:11.618568   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.618568   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:11.618568   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:11.618568   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:11.700342   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:11.691304   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.692191   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.695157   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.697226   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.698318   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:11.691304   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.692191   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.695157   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.697226   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.698318   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:11.700342   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:11.700342   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:11.741856   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:11.741856   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:11.788648   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:11.788648   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:11.849193   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:11.849193   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:14.383220   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:14.404569   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:14.434777   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.434777   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:14.438799   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:14.466806   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.466806   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:14.470274   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:14.496413   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.496413   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:14.500050   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:14.531727   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.531727   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:14.535294   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:14.563393   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.563393   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:14.567315   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:14.592541   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.592541   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:14.596104   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:14.628287   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.628287   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:14.628287   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:14.628287   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:14.692122   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:14.692122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:14.720935   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:14.720935   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:14.809952   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:14.799452   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.800203   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.804585   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.805531   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.807780   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:14.799452   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.800203   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.804585   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.805531   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.807780   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:14.809952   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:14.809952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:14.853842   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:14.853842   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
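Each of the cycles above is minikube's apiserver wait loop: it first checks for a kube-apiserver process (`sudo pgrep -xnf kube-apiserver.*minikube.*`), then for each expected k8s_* container, and, finding none, re-gathers the kubelet/dmesg/describe nodes/Docker/container status logs before retrying about three seconds later. A minimal sketch of running the same probes by hand against the node follows; the profile name `<profile>` and the final healthz curl are illustrative assumptions, not taken from this log:

# Re-run the probes this loop performs (profile name is a placeholder)
minikube -p <profile> ssh -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*"'
minikube -p <profile> ssh -- 'docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}'
# Assumed extra check, not in the log: query the apiserver directly;
# -k skips certificate verification from inside the node
minikube -p <profile> ssh -- 'curl -sk https://localhost:8441/healthz'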
	[05:04:17 - 05:04:41: the identical probe/gather cycle repeats nine more times (pgrep kube-apiserver; docker ps filters for k8s_kube-apiserver, k8s_etcd, k8s_coredns, k8s_kube-scheduler, k8s_kube-proxy, k8s_kube-controller-manager, k8s_kindnet; then kubelet/dmesg/describe nodes/Docker/container status log gathering). Every cycle finds 0 containers, and every `kubectl describe nodes` fails with the same "The connection to the server localhost:8441 was refused" errors.]
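Because every `docker ps -a --filter=name=k8s_*` probe in these cycles returns 0 containers, the control-plane pods were never created at all; the connection-refused errors on localhost:8441 are a symptom of that, which points at kubelet or the static-pod manifests rather than a crashing apiserver. A hedged follow-up sketch; the profile placeholder and the standard kubeadm manifest path are assumptions, not read from this log:

# Check kubelet and the static-pod manifests inside the node (assumed kubeadm layout)
minikube -p <profile> ssh -- 'sudo systemctl status kubelet --no-pager'
minikube -p <profile> ssh -- 'ls -l /etc/kubernetes/manifests'
minikube -p <profile> ssh -- 'sudo journalctl -u kubelet -n 100 --no-pager | grep -iE "fail|error"'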
	I1216 05:04:44.553294   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:44.576740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:44.607009   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.607009   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:44.610623   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:44.635971   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.635971   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:44.639338   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:44.664675   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.664675   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:44.667916   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:44.696295   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.696329   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:44.700356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:44.727661   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.727661   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:44.731273   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:44.759144   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.759174   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:44.762982   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:44.790033   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.790033   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:44.790080   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:44.790080   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:44.817221   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:44.817221   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:44.896592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:44.887275   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.888226   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.890805   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.892527   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.894299   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:44.896592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:44.896592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:44.940361   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:44.940361   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:44.989348   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:44.989348   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
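Every `kubectl describe nodes` attempt in these iterations fails before it ever reaches the API: with no kube-apiserver container running, nothing is listening on port 8441 of the node, so each TCP dial to localhost is refused and client-go logs the memcache.go "Unhandled Error" line five times per invocation (one per discovery retry) before kubectl prints the final "connection to the server localhost:8441 was refused" message. A minimal sketch of that failure mode, using only the Go standard library:

```go
// Minimal reproduction of the repeated
// "dial tcp [::1]:8441: connect: connection refused" lines: with no
// kube-apiserver bound to 8441, the TCP handshake itself is refused.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl targets via the node's kubeconfig.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Prints a *net.OpError wrapping ECONNREFUSED, as in the log.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is listening")
}
```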
	I1216 05:04:47.553461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:47.576347   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:47.606540   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.606602   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:47.610221   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:47.637575   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.637634   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:47.640884   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:47.669743   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.669743   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:47.673137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:47.702380   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.702380   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:47.706154   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:47.732891   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.732891   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:47.736068   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:47.765439   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.765464   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:47.769425   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:47.799223   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.799223   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:47.799223   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:47.799223   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:47.845720   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:47.846247   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:47.903222   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:47.903222   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:47.932986   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:47.933995   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:48.016069   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:48.005024   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.005860   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.008285   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.009577   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.010646   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:48.016069   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:48.016069   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
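Between polls, minikube asks the Docker runtime for each expected control-plane container by its `k8s_` name prefix; all seven queries above return "0 containers: []", which is why only kubelet, dmesg, Docker, and container-status logs can be gathered. A hedged sketch of that per-component check — the `docker ps` invocation and the component list come straight from the log, while the helper around them is illustrative:

```go
// Sketch of the per-component container lookup the log repeats:
// `docker ps -a --filter name=k8s_<component> --format {{.ID}}`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name carries the
// k8s_<component> prefix, or nil when none exist.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		return nil // mirrors "0 containers: []" in the log
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		if len(containerIDs(c)) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}
```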
	I1216 05:04:50.561698   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:50.585162   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:50.615237   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.615237   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:50.618917   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:50.647113   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.647141   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:50.650625   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:50.677020   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.677020   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:50.680813   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:50.708471   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.708495   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:50.712156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:50.739340   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.739340   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:50.744296   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:50.773916   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.773916   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:50.778432   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:50.806364   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.806443   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:50.806443   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:50.806443   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:50.833814   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:50.833814   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:50.931229   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:50.917758   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.919179   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.923691   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.924605   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.925814   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:50.931285   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:50.931285   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:50.973466   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:50.973466   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:51.020564   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:51.020564   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
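The "container status" step is run through bash so it can prefer crictl and fall back to plain `docker ps -a` when crictl is absent or fails. A sketch of issuing that exact compound command — running it locally instead of over SSH is an assumption for illustration:

```go
// The same compound shell command the log shows for the "container
// status" gather. The backquotes substitute crictl's path when it is
// installed; the trailing `|| sudo docker ps -a` covers hosts where
// crictl is missing or errors out.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status gather failed:", err)
	}
	fmt.Print(string(out))
}
```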
	I1216 05:04:53.590321   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:53.613378   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:53.645084   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.645084   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:53.648887   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:53.675145   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.675145   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:53.678830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:53.704801   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.704801   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:53.708956   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:53.735945   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.736019   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:53.740579   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:53.766771   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.766771   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:53.771626   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:53.799949   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.799949   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:53.804011   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:53.831885   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.831885   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:53.831944   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:53.831944   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:53.878883   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:53.878883   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:53.941915   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:53.941915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:53.971778   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:53.971778   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:54.047386   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:54.036815   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038092   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038978   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.040350   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.041669   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:54.047386   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:54.047386   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:56.597206   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:56.623446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:56.654753   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.654783   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:56.657638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:56.687889   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.687889   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:56.691181   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:56.718606   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.718677   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:56.722343   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:56.748289   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.748289   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:56.752614   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:56.782030   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.782030   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:56.785674   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:56.813229   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.813229   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:56.817199   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:56.848354   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.848354   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:56.848354   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:56.848354   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:56.920172   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:56.920172   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:56.950025   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:56.950025   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:57.027703   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:57.017393   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.018120   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.020276   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.021295   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.022786   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:57.027703   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:57.027703   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:57.067904   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:57.067904   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:59.623468   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:59.644700   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:59.675762   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.675762   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:59.679255   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:59.710350   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.710350   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:59.714080   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:59.743398   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.743398   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:59.747303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:59.777836   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.777836   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:59.781321   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:59.806990   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.806990   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:59.811081   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:59.839112   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.839112   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:59.842923   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:59.870519   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.870519   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:59.870519   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:59.870519   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:59.931436   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:59.931436   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:59.961074   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:59.961074   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:00.046620   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:00.036147   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.037355   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.038578   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.039491   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.042183   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:00.046620   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:00.046620   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:00.087812   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:00.087812   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:02.639801   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:02.661744   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:02.693879   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.693879   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:02.697168   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:02.724574   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.724623   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:02.728234   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:02.756463   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.756463   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:02.760215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:02.785297   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.785297   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:02.789630   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:02.815967   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.815967   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:02.820071   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:02.846212   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.846212   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:02.849605   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:02.880460   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.880501   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:02.880501   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:02.880501   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:02.942651   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:02.942651   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:02.973117   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:02.973117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:03.055647   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:03.045630   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.046516   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.048690   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.049939   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.051104   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:03.055647   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:03.055647   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:03.097391   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:03.097391   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:05.655285   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:05.681408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:05.711017   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.711017   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:05.714391   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:05.744313   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.744382   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:05.748472   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:05.778641   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.778641   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:05.782574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:05.808201   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.808201   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:05.811215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:05.845094   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.845094   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:05.849400   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:05.889250   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.889250   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:05.892728   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:05.921657   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.921657   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:05.921657   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:05.921657   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:05.983252   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:05.983252   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:06.013531   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:06.013531   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:06.094324   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:06.085481   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.087264   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.088438   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.089540   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.090612   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:06.094324   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:06.094324   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:06.136404   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:06.136404   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:08.693146   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:08.716116   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:08.744861   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.744861   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:08.748618   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:08.778582   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.778582   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:08.782132   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:08.810955   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.810955   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:08.814794   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:08.844554   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.844554   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:08.848903   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:08.875472   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.875472   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:08.879360   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:08.907445   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.907445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:08.911290   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:08.937114   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.937114   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:08.937114   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:08.937114   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:08.999016   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:08.999016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:09.029260   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:09.029260   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:09.117123   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:09.107890   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.109150   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.110216   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.111522   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.112791   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:09.117123   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:09.117123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:09.158878   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:09.158878   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:11.716383   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:11.739574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:11.772194   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.772194   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:11.776083   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:11.808831   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.808831   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:11.814900   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:11.843123   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.843123   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:11.847084   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:11.877406   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.877406   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:11.883404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:11.909497   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.909497   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:11.915877   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:11.941644   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.941644   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:11.947889   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:11.975058   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.975058   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:11.975058   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:11.975058   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:12.037229   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:12.037229   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:12.066794   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:12.066794   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:12.145714   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:12.137677   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.138809   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.139798   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.141019   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.142446   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:12.145714   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:12.145752   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:12.189122   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:12.189122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:14.741253   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:14.764365   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:14.795995   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.795995   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:14.799654   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:14.827360   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.827360   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:14.830473   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:14.877262   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.877262   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:14.881028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:14.907013   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.907013   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:14.910966   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:14.940012   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.940012   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:14.943533   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:14.973219   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.973219   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:14.977027   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:15.005016   13524 logs.go:282] 0 containers: []
	W1216 05:05:15.005016   13524 logs.go:284] No container was found matching "kindnet"
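Each poll scans for the control-plane containers by name: with the Docker runtime, pod containers are named k8s_<component>_..., so one docker ps -a filter per component suffices, and "0 containers" across the board means the kubelet never created the static pods. A standalone sketch of the same scan (an approximation of the loop above, assuming a docker CLI on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            // Same filter the log uses: match container names k8s_<component>*.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
        }
    }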
	I1216 05:05:15.005016   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:15.005016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:15.068144   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:15.068144   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
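The remaining log buckets are plain shell pipelines: the last 400 journal lines for the kubelet and docker/cri-docker units, plus kernel messages at warn level and above from dmesg. A compact sketch that collects the same three buckets (assumes a systemd host with bash; not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one bucket's shell pipeline and prints it under a label.
    func gather(label, pipeline string) {
        out, _ := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
        fmt.Printf("==> %s <==\n%s\n", label, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }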
	I1216 05:05:15.097979   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:15.097979   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:15.178592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:15.170495   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.171184   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.173358   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.174428   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.175575   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:15.178592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:15.178592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:15.226390   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:15.226390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:17.780482   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:17.801597   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:17.829508   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.829533   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:17.833177   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:17.859642   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.859642   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:17.862985   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:17.890800   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.890800   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:17.893950   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:17.924358   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.924358   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:17.927717   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:17.953300   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.953300   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:17.957301   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:17.985802   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.985802   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:17.989495   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:18.016952   13524 logs.go:282] 0 containers: []
	W1216 05:05:18.016952   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:18.016952   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:18.016952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:18.106203   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:18.093536   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.094540   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.097011   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.098056   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.099323   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:18.106203   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:18.106203   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:18.149655   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:18.149655   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:18.195681   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:18.195707   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:18.257349   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:18.257349   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:20.791461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:20.812868   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:20.842707   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.842740   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:20.846536   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:20.875894   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.875894   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:20.879319   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:20.909010   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.909010   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:20.912866   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:20.941362   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.941362   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:20.945334   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:20.973226   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.973226   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:20.977453   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:21.004793   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.004793   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:21.008493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:21.034240   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.034240   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:21.034240   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:21.034240   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:21.098331   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:21.098331   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:21.129173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:21.129173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:21.218614   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:21.206034   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.207338   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.209505   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.211860   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.213420   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:21.218614   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:21.218614   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:21.261020   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:21.261020   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:23.818479   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:23.840022   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:23.873329   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.873385   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:23.877280   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:23.903358   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.903395   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:23.907325   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:23.934336   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.934336   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:23.938027   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:23.966398   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.966398   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:23.969989   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:23.996674   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.996674   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:24.000315   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:24.027001   13524 logs.go:282] 0 containers: []
	W1216 05:05:24.027001   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:24.030715   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:24.059648   13524 logs.go:282] 0 containers: []
	W1216 05:05:24.059648   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:24.059648   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:24.059648   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:24.120785   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:24.120785   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:24.155678   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:24.155678   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:24.234706   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:24.223173   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.224035   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.226157   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.227148   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.228146   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:24.234706   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:24.234706   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:24.278016   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:24.278016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:26.831237   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:26.852827   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:26.880996   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.880996   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:26.884822   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:26.912292   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.912292   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:26.916020   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:26.941600   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.941623   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:26.945391   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:26.972003   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.972068   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:26.975790   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:27.003933   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.003933   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:27.007292   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:27.033829   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.033861   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:27.037496   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:27.065486   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.065486   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:27.065486   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:27.065486   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:27.129425   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:27.129425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:27.158980   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:27.158980   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:27.240946   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:27.230164   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.231001   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.233339   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.234319   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.235558   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:27.240946   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:27.240946   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:27.282635   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:27.282635   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:29.835505   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:29.856873   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:29.887755   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.887755   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:29.891311   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:29.919341   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.919341   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:29.923153   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:29.949569   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.949569   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:29.953446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:29.982150   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.982217   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:29.985852   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:30.012079   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.012079   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:30.017875   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:30.044535   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.044597   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:30.048212   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:30.075190   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.075223   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:30.075223   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:30.075254   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:30.118411   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:30.118411   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:30.169092   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:30.169092   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:30.224666   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:30.224666   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:30.257052   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:30.257052   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:30.345423   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:30.334921   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.335618   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.339017   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.340268   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.341411   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:32.850775   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:32.874038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:32.905193   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.905193   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:32.908688   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:32.935829   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.935829   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:32.939716   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:32.967717   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.967717   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:32.971291   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:32.997404   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.997452   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:33.001346   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:33.033845   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.033845   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:33.037379   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:33.065410   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.065410   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:33.070454   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:33.097202   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.097202   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:33.097202   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:33.097276   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:33.159607   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:33.159607   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:33.190136   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:33.190288   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:33.270012   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:33.258945   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.259847   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.262213   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.263220   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.265983   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:33.270012   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:33.270012   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:33.313088   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:33.313088   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:35.881230   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:35.903303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:35.933399   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.933399   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:35.936917   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:35.963670   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.963670   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:35.967376   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:35.993260   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.993260   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:35.999083   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:36.022547   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.022547   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:36.026765   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:36.058006   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.058006   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:36.061823   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:36.090079   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.090079   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:36.096186   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:36.124272   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.124272   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:36.124343   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:36.124343   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:36.187477   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:36.187477   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:36.217944   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:36.217944   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:36.308580   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:36.295229   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.296002   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.301995   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.302833   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.305048   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:36.308580   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:36.308580   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:36.350059   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:36.350059   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:38.904862   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:38.926217   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:38.956469   13524 logs.go:282] 0 containers: []
	W1216 05:05:38.956469   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:38.959962   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:38.986769   13524 logs.go:282] 0 containers: []
	W1216 05:05:38.986769   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:38.990008   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:39.018465   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.018465   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:39.021941   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:39.050244   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.050244   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:39.054097   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:39.080344   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.080344   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:39.084719   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:39.111908   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.111908   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:39.116234   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:39.145295   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.145295   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:39.145329   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:39.145329   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:39.190461   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:39.190461   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:39.250498   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:39.250498   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:39.281744   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:39.281744   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:39.360278   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:39.352154   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.353091   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.354283   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.355420   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.356645   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:39.360278   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:39.360278   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:41.907417   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:41.930781   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:41.959028   13524 logs.go:282] 0 containers: []
	W1216 05:05:41.959028   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:41.962118   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:41.992218   13524 logs.go:282] 0 containers: []
	W1216 05:05:41.992218   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:41.995638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:42.022706   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.022706   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:42.025963   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:42.058549   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.058591   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:42.063102   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:42.092433   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.092433   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:42.096210   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:42.124136   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.124136   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:42.127883   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:42.157397   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.157397   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:42.157397   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:42.157397   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:42.208439   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:42.208439   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:42.271217   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:42.271217   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:42.299862   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:42.300836   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:42.380228   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:42.370908   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.371801   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.372982   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.375094   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.376194   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:42.380228   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:42.380270   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
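Every retry in this stretch finds zero control-plane containers and ends with connection refused on 8441, so the failure sits upstream of kubectl: `docker ps -a` (which includes exited containers) returns no match for `k8s_kube-apiserver`, meaning no apiserver container was ever created on this node. None of the commands below appear in the test output; they are a minimal sketch of reproducing the same three checks by hand from a shell on the node (e.g. via `minikube ssh`), assuming curl is installed there:

    # Is an apiserver process running at all? (the same pgrep the log issues)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Was a kube-apiserver container ever created? (the same docker filter the log issues)
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}\t{{.Status}}'

    # Does anything answer on the port kubectl keeps dialing?
    curl -sk --max-time 2 https://localhost:8441/livez || echo 'nothing listening on 8441'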
	I1216 05:05:44.926983   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:44.949386   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:44.980885   13524 logs.go:282] 0 containers: []
	W1216 05:05:44.980885   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:44.984714   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:45.011775   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.011775   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:45.016515   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:45.044937   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.044937   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:45.048973   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:45.076493   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.076493   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:45.080322   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:45.107894   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.107894   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:45.111226   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:45.140033   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.140033   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:45.145613   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:45.173403   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.173403   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:45.173403   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:45.173403   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:45.234157   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:45.234157   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:45.263615   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:45.263615   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:45.340483   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:45.331453   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.332466   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.333768   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.334753   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.335717   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:45.331453   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.332466   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.333768   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.334753   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.335717   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:45.340483   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:45.340483   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:45.385573   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:45.385573   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:47.944179   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:47.965345   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:47.994755   13524 logs.go:282] 0 containers: []
	W1216 05:05:47.994755   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:47.997830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:48.025155   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.025155   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:48.028458   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:48.056617   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.056617   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:48.060320   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:48.089066   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.089066   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:48.092698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:48.121598   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.121628   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:48.125680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:48.157191   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.157191   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:48.160973   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:48.188668   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.188668   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:48.188668   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:48.188668   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:48.244524   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:48.244524   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:48.275889   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:48.275889   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:48.367425   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:48.355136   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.356146   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.358362   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.360588   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.361743   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:48.355136   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.356146   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.358362   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.360588   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.361743   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:48.367425   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:48.367425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:48.406776   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:48.406776   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:50.963363   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:50.986681   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:51.017484   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.017484   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:51.021749   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:51.049184   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.049184   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:51.052784   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:51.083798   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.083798   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:51.087092   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:51.116150   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.116181   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:51.119540   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:51.148592   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.148592   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:51.152543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:51.182496   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.182496   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:51.186206   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:51.212397   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.212397   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:51.212397   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:51.212397   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:51.294464   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:51.283439   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.284417   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.286178   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.287320   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.289084   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:51.283439   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.284417   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.286178   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.287320   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.289084   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:51.294464   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:51.294464   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:51.336829   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:51.336829   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:51.385258   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:51.385258   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:51.444652   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:51.444652   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:53.980590   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:54.001769   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:54.030775   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.030775   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:54.034817   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:54.062359   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.062385   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:54.065740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:54.093857   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.093857   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:54.097137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:54.127972   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.127972   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:54.131415   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:54.158859   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.158859   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:54.162622   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:54.192077   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.192077   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:54.195448   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:54.223226   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.223226   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:54.223226   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:54.223226   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:54.267495   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:54.268494   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:54.318458   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:54.318458   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:54.379319   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:54.379319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:54.409390   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:54.409390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:54.497343   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:54.486388   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.487502   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.488610   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.489914   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.490890   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:54.486388   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.487502   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.488610   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.489914   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.490890   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
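The entries repeat on a roughly three-second cadence: minikube re-polls for the apiserver process, re-filters containers, and re-gathers the same logs each round. As a rough hand-run stand-in for that wait (a sketch, not minikube's actual implementation), a shell loop like the following, assuming bash and curl on the node, blocks until the port starts answering:

    # Poll until the apiserver answers on 8441, about every 3s as the log does.
    until curl -sk --max-time 2 https://localhost:8441/version >/dev/null; do
        echo "$(date +%T) apiserver not up yet"; sleep 3
    done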
	I1216 05:05:57.001942   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:57.024505   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:57.051420   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.051420   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:57.055095   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:57.086650   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.086650   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:57.090451   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:57.116570   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.116570   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:57.119823   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:57.150064   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.150064   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:57.154328   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:57.180973   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.180973   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:57.185282   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:57.216597   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.216597   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:57.220216   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:57.246877   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.246877   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:57.246945   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:57.246945   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:57.308963   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:57.308963   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:57.340818   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:57.340818   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:57.440976   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:57.429668   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.430817   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.432070   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.433114   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.434207   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:57.429668   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.430817   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.432070   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.433114   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.434207   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:57.440976   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:57.440976   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:57.485863   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:57.485863   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:00.038815   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:00.060757   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:00.089849   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.089849   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:00.093819   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:00.121426   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.121426   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:00.127493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:00.155063   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.155063   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:00.158469   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:00.186269   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.186269   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:00.191767   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:00.220680   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.220680   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:00.224397   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:00.251492   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.251492   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:00.255561   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:00.282084   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.282084   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:00.282084   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:00.282084   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:00.340687   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:00.340687   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:00.369302   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:00.369302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:00.450456   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:00.439681   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.441111   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.443533   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.444882   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.446042   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:00.439681   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.441111   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.443533   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.444882   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.446042   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:00.450456   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:00.450456   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:00.494633   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:00.494633   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
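Each cycle collects the same four log sources. To inspect them interactively rather than through minikube's collector, the underlying commands (visible verbatim in the Run: lines above) can be issued directly on the node; --no-pager is the only addition here, for non-interactive shells:

    sudo journalctl -u kubelet -n 400 --no-pager                 # kubelet: why pods never started
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager    # container runtime and CRI shim
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a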
	I1216 05:06:03.047228   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:03.070414   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:03.100869   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.100869   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:03.106543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:03.133873   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.133873   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:03.137304   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:03.169605   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.169605   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:03.173548   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:03.203086   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.203086   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:03.206980   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:03.233903   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.233903   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:03.239541   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:03.269916   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.269940   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:03.273671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:03.301055   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.301055   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:03.301055   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:03.301055   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:03.361314   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:03.361314   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:03.391207   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:03.391207   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:03.477457   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:03.467080   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.468297   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.470723   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.472023   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.473419   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:03.467080   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.468297   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.470723   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.472023   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.473419   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:03.477457   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:03.477457   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:03.517504   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:03.517504   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:06.085750   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:06.108609   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:06.136944   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.136944   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:06.141119   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:06.168680   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.168680   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:06.172752   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:06.201039   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.201039   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:06.204417   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:06.234173   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.234173   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:06.237313   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:06.268910   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.268910   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:06.272680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:06.302995   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.303025   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:06.306434   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:06.343040   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.343040   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:06.343040   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:06.343040   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:06.404754   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:06.404754   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:06.438236   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:06.438236   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:06.533746   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:06.523818   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.524791   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.526159   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.527425   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.528623   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:06.523818   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.524791   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.526159   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.527425   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.528623   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:06.533746   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:06.533746   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:06.587048   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:06.587048   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:09.143712   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:09.167180   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:09.197847   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.197847   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:09.201143   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:09.231047   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.231047   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:09.234772   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:09.263936   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.263936   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:09.267839   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:09.293408   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.293408   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:09.297079   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:09.325926   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.325926   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:09.329675   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:09.354839   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.354839   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:09.358679   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:09.386294   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.386294   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:09.386294   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:09.386294   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:09.446046   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:09.446046   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:09.474123   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:09.474123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:09.570430   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:09.552344   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.553464   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.562467   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.564909   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.565822   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:09.552344   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.553464   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.562467   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.564909   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.565822   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:09.570430   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:09.570430   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:09.612996   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:09.612996   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:12.162991   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:12.185413   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:12.220706   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.220706   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:12.224471   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:12.252012   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.252085   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:12.255507   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:12.287146   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.287146   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:12.291350   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:12.322209   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.322209   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:12.326285   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:12.352463   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.352463   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:12.356344   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:12.384416   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.384445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:12.388099   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:12.416249   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.416249   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:12.416249   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:12.416249   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:12.457279   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:12.457279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:12.504035   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:12.504035   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:12.565073   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:12.565073   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:12.594834   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:12.594834   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:12.671197   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:12.662068   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.663058   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.664278   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.666376   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.667861   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:12.662068   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.663058   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.664278   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.666376   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.667861   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:15.176441   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:15.198949   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:15.228375   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.228375   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:15.232284   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:15.260859   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.260859   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:15.264596   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:15.289482   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.289482   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:15.293332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:15.321841   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.321889   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:15.325366   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:15.355205   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.355205   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:15.359602   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:15.391155   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.391155   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:15.395288   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:15.422696   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.422696   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:15.422696   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:15.422696   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:15.509885   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:15.501731   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.502732   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.503898   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.505461   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.506268   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:15.501731   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.502732   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.503898   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.505461   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.506268   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:15.509885   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:15.509885   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:15.550722   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:15.550722   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:15.597215   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:15.598218   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:15.655170   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:15.655170   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:18.189600   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:18.214190   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:18.244833   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.244918   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:18.248323   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:18.274826   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.274826   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:18.278263   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:18.305755   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.305755   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:18.310038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:18.339762   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.339762   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:18.343253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:18.372235   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.372235   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:18.376253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:18.405785   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.405785   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:18.410335   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:18.436279   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.436279   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:18.436279   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:18.436279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:18.477830   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:18.477830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:18.533284   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:18.533302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:18.592952   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:18.592952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:18.623173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:18.623173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:18.706158   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:21.211431   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:21.233375   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:21.263996   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.263996   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:21.267857   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:21.296614   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.296614   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:21.300408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:21.327435   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.327435   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:21.331241   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:21.361684   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.361684   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:21.365531   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:21.393896   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.393896   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:21.397371   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:21.427885   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.427885   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:21.431500   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:21.459772   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.459772   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:21.459772   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:21.459772   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:21.522041   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:21.522041   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:21.550901   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:21.550901   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:21.638725   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:21.638725   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:21.638725   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:21.680001   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:21.680001   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:24.235731   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:24.258332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:24.285838   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.285838   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:24.289583   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:24.320077   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.320077   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:24.323958   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:24.351529   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.351529   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:24.355109   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:24.382170   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.382170   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:24.385526   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:24.415016   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.415016   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:24.418742   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:24.446275   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.446275   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:24.449841   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:24.475953   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.475953   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:24.475953   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:24.475953   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:24.537960   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:24.537960   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:24.566319   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:24.566319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:24.648912   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:24.648912   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:24.648912   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:24.689261   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:24.689261   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:27.244212   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:27.265843   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:27.291130   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.291130   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:27.295137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:27.321255   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.321255   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:27.324759   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:27.355906   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.355906   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:27.359611   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:27.386761   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.386761   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:27.390275   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:27.419553   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.419586   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:27.423093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:27.451634   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.451634   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:27.455077   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:27.485799   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.485799   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:27.485799   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:27.485799   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:27.547830   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:27.547830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:27.576915   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:27.576915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:27.661056   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:27.661056   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:27.661056   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:27.700831   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:27.700831   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:30.249035   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:30.271093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:30.299108   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.299188   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:30.302446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:30.332396   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.332482   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:30.338127   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:30.366185   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.366185   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:30.369711   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:30.400279   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.400279   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:30.404337   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:30.432897   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.432897   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:30.437025   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:30.465969   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.465969   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:30.470356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:30.499169   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.499169   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:30.499169   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:30.499169   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:30.557232   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:30.557232   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:30.584956   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:30.584956   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:30.671890   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:30.671890   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:30.671890   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:30.714351   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:30.714351   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:33.262234   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:33.280780   13524 kubeadm.go:602] duration metric: took 4m2.2739333s to restartPrimaryControlPlane
	W1216 05:06:33.280780   13524 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 05:06:33.285614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:06:33.738970   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:33.760826   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:33.774044   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:33.778124   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:33.790578   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:33.790578   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:33.794570   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:06:33.806138   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:33.810590   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:33.828749   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:06:33.841712   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:33.846141   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:33.862218   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.872779   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:33.877830   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.893064   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:06:33.905212   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:33.909089   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:06:33.925766   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:34.031218   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:06:34.116656   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:06:34.211658   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:10:35.264797   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:10:35.264797   13524 kubeadm.go:319] 
	I1216 05:10:35.264797   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:10:35.269807   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:35.269807   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:35.269807   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:35.270949   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:35.271576   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:35.272413   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:35.272605   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:35.273278   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:35.273322   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:35.273414   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:35.273503   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:35.273681   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:35.273728   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:35.273769   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:35.273813   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:35.273855   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:35.273913   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:35.274584   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:35.274584   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:35.293047   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:35.293426   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:35.293599   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:35.293913   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:35.294149   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:35.294885   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:35.294982   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:35.295109   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:35.295195   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:35.295363   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:35.295447   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:35.295612   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:35.295735   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:35.295944   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:35.296070   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:35.299081   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:35.299081   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:35.300333   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000864945s
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	W1216 05:10:35.301920   13524 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000864945s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 05:10:35.307024   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:10:35.771515   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:10:35.789507   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:10:35.793192   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:10:35.806790   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:10:35.806790   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:10:35.811076   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:10:35.824674   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:10:35.830540   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:10:35.849846   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:10:35.864835   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:10:35.868716   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:10:35.884647   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.897559   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:10:35.901847   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.919926   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:10:35.932321   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:10:35.937201   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:10:35.958683   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:10:36.010883   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:36.010883   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:36.157778   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:36.157778   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:36.157778   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:36.158306   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:36.158377   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:36.158462   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:36.158630   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:36.158749   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:36.158829   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:36.158950   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:36.159106   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:36.159725   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:36.159807   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:36.159927   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:36.160002   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:36.160137   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:36.160246   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:36.160629   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:36.161060   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:36.161172   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:36.263883   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:36.285337   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:36.291241   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:36.291368   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:36.291473   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:36.291610   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:36.292292   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:36.292479   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:36.355551   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:36.426990   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:36.485556   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:36.680670   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:36.834763   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:36.835291   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:36.840606   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:36.844374   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:36.844573   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:37.021660   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:37.022023   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:14:36.995901   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000744142s
	I1216 05:14:36.995988   13524 kubeadm.go:319] 
	I1216 05:14:36.996138   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:14:36.996214   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:14:36.996375   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:14:36.996375   13524 kubeadm.go:319] 
	I1216 05:14:36.996441   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 
	I1216 05:14:37.001376   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:14:37.002575   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:14:37.002650   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:14:37.002650   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:14:37.002650   13524 kubeadm.go:319] 
	I1216 05:14:37.003329   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:14:37.003329   13524 kubeadm.go:403] duration metric: took 12m6.0383556s to StartCluster
	I1216 05:14:37.003329   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:14:37.007935   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:14:37.064773   13524 cri.go:89] found id: ""
	I1216 05:14:37.064773   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.064773   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:14:37.064773   13524 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:14:37.069487   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:14:37.111914   13524 cri.go:89] found id: ""
	I1216 05:14:37.111914   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.111914   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:14:37.111914   13524 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:14:37.116663   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:14:37.152644   13524 cri.go:89] found id: ""
	I1216 05:14:37.152667   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.152667   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:14:37.152667   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:14:37.157010   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:14:37.200196   13524 cri.go:89] found id: ""
	I1216 05:14:37.200196   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.200196   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:14:37.200268   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:14:37.204321   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:14:37.243623   13524 cri.go:89] found id: ""
	I1216 05:14:37.243623   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.243623   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:14:37.243623   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:14:37.248366   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:14:37.289277   13524 cri.go:89] found id: ""
	I1216 05:14:37.289277   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.289277   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:14:37.289277   13524 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:14:37.294034   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:14:37.333593   13524 cri.go:89] found id: ""
	I1216 05:14:37.333593   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.333593   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:14:37.333593   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:14:37.333593   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:14:37.417323   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:14:37.417323   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:14:37.417323   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:14:37.457412   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:14:37.457412   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:14:37.504416   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:14:37.504416   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:14:37.564994   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:14:37.564994   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 05:14:37.597706   13524 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.597706   13524 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout and kubelet failure output identical to the "Error starting cluster" block above; repeated verbatim in the original log ...]
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.600079   13524 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:14:37.606140   13524 out.go:203] 
	W1216 05:14:37.609999   13524 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... kubeadm init stdout and kubelet failure output identical to the "Error starting cluster" block above; repeated verbatim in the original log ...]
	W1216 05:14:37.610044   13524 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 05:14:37.610044   13524 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 05:14:37.613011   13524 out.go:203] 
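
The Suggestion printed above is minikube's stock advice for this exit code (see issue #4172). A hedged retry sketch using that exact override; whether v1.35.0-beta.0 still accepts `cgroup-driver` as a kubelet flag is uncertain, since newer kubelets read the cgroup driver from their config file:

    # Retry with the kubelet override minikube suggests above.
    out/minikube-windows-amd64.exe start -p functional-002200 \
        --extra-config=kubelet.cgroup-driver=systemd

As the kubelet log below shows, though, the real blocker on this host is the cgroup v1 validation, not the driver setting.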
	
	
	==> Docker <==
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685355275Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685360576Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685379878Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685414282Z" level=info msg="Initializing buildkit"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.786375434Z" level=info msg="Completed buildkit initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.794992212Z" level=info msg="Daemon has completed initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151030Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151330Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795240140Z" level=info msg="API listen on [::]:2376"
	Dec 16 05:02:27 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:27 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:02:28 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Loaded network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 05:02:28 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:15:33.470160   41248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:15:33.471044   41248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:15:33.473534   41248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:15:33.474522   41248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:15:33.475541   41248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 05:02] CPU: 4 PID: 64252 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001010] RIP: 0033:0x7fd1009fcb20
	[  +0.000612] Code: Unable to access opcode bytes at RIP 0x7fd1009fcaf6.
	[  +0.000841] RSP: 002b:00007ffd77dbf430 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000856] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000836] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000800] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000980] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000812] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.813920] CPU: 4 PID: 64374 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f21a7b6eb20
	[  +0.000430] Code: Unable to access opcode bytes at RIP 0x7f21a7b6eaf6.
	[  +0.000678] RSP: 002b:00007ffd5fd33ec0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000807] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000827] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000787] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000791] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000788] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:15:33 up 51 min,  0 user,  load average: 0.18, 0.28, 0.40
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:15:30 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:15:30 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 392.
	Dec 16 05:15:30 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:30 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:30 functional-002200 kubelet[41090]: E1216 05:15:30.996168   41090 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:15:30 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:15:30 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:15:31 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 393.
	Dec 16 05:15:31 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:31 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:31 functional-002200 kubelet[41103]: E1216 05:15:31.790526   41103 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:15:31 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:15:31 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:15:32 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 394.
	Dec 16 05:15:32 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:32 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:32 functional-002200 kubelet[41132]: E1216 05:15:32.509407   41132 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:15:32 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:15:32 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:15:33 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 395.
	Dec 16 05:15:33 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:33 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:15:33 functional-002200 kubelet[41183]: E1216 05:15:33.251279   41183 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:15:33 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:15:33 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (566.8523ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (53.99s)
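
The ==> kubelet <== section above pins the root cause: this kubelet (v1.35.0-beta.0) refuses to run on a cgroup v1 host, and the WSL2 kernel here is still on cgroup v1. The kubeadm preflight warning names the opt-out, the kubelet configuration option 'FailCgroupV1'. A sketch of both the check and the opt-out, assuming shell access inside the node; the config edit is illustrative only, since minikube rewrites /var/lib/kubelet/config.yaml on every start:

    # "cgroup2fs" means cgroup v2; "tmpfs" means the deprecated v1 seen here.
    stat -fc %T /sys/fs/cgroup

    # Per the preflight warning, the opt-out lives in the KubeletConfiguration;
    # the YAML field name is assumed to be the camelCase form of 'FailCgroupV1'.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml

The durable fix for this worker is moving WSL2 to cgroup v2 (e.g. `kernelCommandLine = cgroup_no_v1=all` under `[wsl2]` in `.wslconfig`, then `wsl --shutdown`) rather than opting kubelet back into deprecated v1 support.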

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-002200 apply -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-002200 apply -f testdata\invalidsvc.yaml: exit status 1 (20.1970971s)

** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:49316/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-002200 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.20s)
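
This failure is downstream of the same dead apiserver: kubectl never reached validation, because fetching the OpenAPI schema from https://127.0.0.1:49316 hit EOF. A quick way to separate the two causes (context and endpoint taken from the error above):

    # If this also fails, the apiserver is down and manifest validation is moot;
    # --validate=false (from the error text) only matters once the server answers.
    kubectl --context functional-002200 get --raw /healthz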

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (4.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 status: exit status 2 (584.3158ms)

-- stdout --
	functional-002200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-002200 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (626.1626ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-002200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 status -o json: exit status 2 (592.9305ms)

-- stdout --
	{"Name":"functional-002200","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-002200 status -o json" : exit status 2
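
All three invocations exit with status 2, which this suite treats as "may be ok" elsewhere in the report: the host is running but a component (here the apiserver) is stopped. A hedged sketch for scripting against that convention, reusing the format flag seen earlier:

    out/minikube-windows-amd64.exe -p functional-002200 status --format='{{.APIServer}}'
    rc=$?
    # Observed in this report: rc 0 = healthy, rc 2 = host up but a component stopped.
    if [ "$rc" -eq 2 ]; then
        echo "cluster degraded: control-plane component down"
    fi

Note the three outputs disagree about the kubelet (Running in the plain and custom formats, Stopped in JSON); given the restart loop in the earlier kubelet log, its reported state simply flaps between probes. The "kublet" label is a typo in the test's own format string, not a minikube field.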
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
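
[editor's note] The NetworkSettings.Ports map in the inspect dump above is what resolves the apiserver endpoint used throughout this report (8441/tcp is published on 127.0.0.1:49316 in this run). A sketch of that lookup, assuming the standard `docker inspect` JSON layout shown above; the container name and port key come from this run:

// portlookup.go - find the host port bound to the node's 8441/tcp
// (the apiserver) in `docker inspect` output; a diagnostic sketch
// based on the JSON structure dumped above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-002200").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry // docker inspect always prints a JSON array
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		panic("unexpected inspect output")
	}
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}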
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (591.5947ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.0643871s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-002200 service list                                                                                                                            │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ service │ functional-002200 service list -o json                                                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service --namespace=default --https --url hello-node                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service hello-node --url --format={{.IP}}                                                                                               │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ service │ functional-002200 service hello-node --url                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image save kicbase/echo-server:functional-002200 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image rm kicbase/echo-server:functional-002200 --alsologtostderr                                                                        │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image save --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh     │ functional-002200 ssh echo hello                                                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh     │ functional-002200 ssh cat /etc/hostname                                                                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ addons  │ functional-002200 addons list                                                                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ addons  │ functional-002200 addons list -o json                                                                                                                     │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ start   │ -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ start   │ -p functional-002200 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:17:01
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:17:01.486200    3868 out.go:360] Setting OutFile to fd 1984 ...
	I1216 05:17:01.528188    3868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:01.528188    3868 out.go:374] Setting ErrFile to fd 820...
	I1216 05:17:01.528188    3868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:01.542464    3868 out.go:368] Setting JSON to false
	I1216 05:17:01.544722    3868 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3243,"bootTime":1765858978,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:17:01.544845    3868 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:17:01.547896    3868 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:17:01.551193    3868 notify.go:221] Checking for updates...
	I1216 05:17:01.551193    3868 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:17:01.553365    3868 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:17:01.556402    3868 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:17:01.558122    3868 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:17:01.567587    3868 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:17:01.570417    3868 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:17:01.571425    3868 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:17:01.687923    3868 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:17:01.691981    3868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:17:01.910454    3868 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 05:17:01.893394254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:17:01.928854    3868 out.go:179] * Using the docker driver based on existing profile
	I1216 05:17:01.935664    3868 start.go:309] selected driver: docker
	I1216 05:17:01.935664    3868 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:17:01.935664    3868 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:17:01.942703    3868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:17:02.186444    3868 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 05:17:02.169844052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:17:02.222382    3868 cni.go:84] Creating CNI manager for ""
	I1216 05:17:02.223379    3868 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:17:02.223379    3868 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:17:02.227378    3868 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685355275Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685360576Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685379878Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685414282Z" level=info msg="Initializing buildkit"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.786375434Z" level=info msg="Completed buildkit initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.794992212Z" level=info msg="Daemon has completed initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151030Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151330Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795240140Z" level=info msg="API listen on [::]:2376"
	Dec 16 05:02:27 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:27 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:02:28 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Loaded network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 05:02:28 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:17:05.659585   44226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:17:05.660495   44226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:17:05.663422   44226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:17:05.664854   44226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:17:05.666286   44226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 05:02] CPU: 4 PID: 64252 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001010] RIP: 0033:0x7fd1009fcb20
	[  +0.000612] Code: Unable to access opcode bytes at RIP 0x7fd1009fcaf6.
	[  +0.000841] RSP: 002b:00007ffd77dbf430 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000856] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000836] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000800] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000980] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000812] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.813920] CPU: 4 PID: 64374 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f21a7b6eb20
	[  +0.000430] Code: Unable to access opcode bytes at RIP 0x7f21a7b6eaf6.
	[  +0.000678] RSP: 002b:00007ffd5fd33ec0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000807] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000827] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000787] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000791] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000788] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:17:05 up 53 min,  0 user,  load average: 0.34, 0.33, 0.41
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:17:02 functional-002200 kubelet[44028]: E1216 05:17:02.746602   44028 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:17:02 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:17:02 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:17:03 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 515.
	Dec 16 05:17:03 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:03 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:03 functional-002200 kubelet[44060]: E1216 05:17:03.476775   44060 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:17:03 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:17:03 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:17:04 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 516.
	Dec 16 05:17:04 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:04 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:04 functional-002200 kubelet[44090]: E1216 05:17:04.246631   44090 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:17:04 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:17:04 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:17:04 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 517.
	Dec 16 05:17:04 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:04 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:05 functional-002200 kubelet[44118]: E1216 05:17:05.001107   44118 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:17:05 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:17:05 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:17:05 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 518.
	Dec 16 05:17:05 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:05 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:17:05 functional-002200 kubelet[44234]: E1216 05:17:05.763372   44234 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (573.4704ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (4.11s)
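[editor's note] The kubelet log above is the root cause for this group of failures: the node container sees a cgroup v1 hierarchy (WSL2 kernel 5.15 here), and kubelet v1.35.0-beta.0 refuses to validate its configuration, so the apiserver never comes back up. A quick way to confirm which cgroup version the node sees, e.g. from inside `minikube ssh`, is to stat the unified hierarchy's control file; a minimal sketch (the path check is a general Linux convention, not part of the test suite):

// cgroupcheck.go - report whether the unified cgroup v2 hierarchy is
// mounted; /sys/fs/cgroup/cgroup.controllers exists only under cgroup v2.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else if os.IsNotExist(err) {
		fmt.Println("cgroup v1 - the kubelet in this report refuses to start here")
	} else {
		fmt.Println("stat failed:", err)
	}
}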

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (122.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-002200 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-002200 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (93.0027ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:49316/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-002200 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
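[editor's note] Every kubectl call in this test dies with EOF against https://127.0.0.1:49316, the forwarded apiserver port from the inspect dump earlier. When triaging this kind of failure it can help to probe the endpoint directly instead of waiting out kubectl's discovery retries; a sketch (the /readyz path and the skip-verify transport are diagnostic shortcuts, and the port is specific to this run):

// apiprobe.go - probe the apiserver health endpoint directly.
// InsecureSkipVerify is tolerable only as a local diagnostic against
// a throwaway test cluster; never use it against a real one.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://127.0.0.1:49316/readyz") // port taken from this run
	if err != nil {
		// An EOF here reproduces the kubectl errors in this section without the retries.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}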
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-002200 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-002200 describe po hello-node-connect: exit status 1 (50.3215182s)

** stderr ** 
	E1216 05:16:49.098758   10040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:59.184349   10040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:17:09.221754   10040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:17:19.256906   10040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:17:29.296663   10040 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1614: "kubectl --context functional-002200 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-002200 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-002200 logs -l app=hello-node-connect: exit status 1 (40.29352s)

** stderr ** 
	E1216 05:17:39.428199    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:17:49.510450    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:17:59.552534    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:18:09.592841    4092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1620: "kubectl --context functional-002200 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-002200 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-002200 describe svc hello-node-connect: exit status 1 (29.3762316s)

** stderr ** 
	E1216 05:18:19.732454   13536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:18:29.817370   13536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"

** /stderr **
functional_test.go:1626: "kubectl --context functional-002200 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (568.3385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.0520969s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                         │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-002200 image ls                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image          │ functional-002200 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image          │ functional-002200 image ls                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image          │ functional-002200 image save --daemon kicbase/echo-server:functional-002200 --alsologtostderr                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh            │ functional-002200 ssh echo hello                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh            │ functional-002200 ssh cat /etc/hostname                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ tunnel         │ functional-002200 tunnel --alsologtostderr                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel         │ functional-002200 tunnel --alsologtostderr                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel         │ functional-002200 tunnel --alsologtostderr                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ addons         │ functional-002200 addons list                                                                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ addons         │ functional-002200 addons list -o json                                                                               │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ start          │ -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ start          │ -p functional-002200 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0           │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ start          │ -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-002200 --alsologtostderr -v=1                                                      │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ update-context │ functional-002200 update-context --alsologtostderr -v=2                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ update-context │ functional-002200 update-context --alsologtostderr -v=2                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ update-context │ functional-002200 update-context --alsologtostderr -v=2                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format short --alsologtostderr                                                         │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format yaml --alsologtostderr                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ ssh            │ functional-002200 ssh pgrep buildkitd                                                                               │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ image          │ functional-002200 image build -t localhost/my-image:functional-002200 testdata\build --alsologtostderr              │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format json --alsologtostderr                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format table --alsologtostderr                                                         │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:17:06
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:17:06.453329    5816 out.go:360] Setting OutFile to fd 1532 ...
	I1216 05:17:06.497636    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:06.497636    5816 out.go:374] Setting ErrFile to fd 476...
	I1216 05:17:06.497636    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:06.512261    5816 out.go:368] Setting JSON to false
	I1216 05:17:06.515710    5816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3248,"bootTime":1765858978,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:17:06.515840    5816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:17:06.519311    5816 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:17:06.523675    5816 notify.go:221] Checking for updates...
	I1216 05:17:06.523724    5816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:17:06.526347    5816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:17:06.529287    5816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:17:06.531703    5816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:17:06.533890    5816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:17:06.536576    5816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:17:06.537778    5816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:17:06.656791    5816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:17:06.660998    5816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:17:06.894343    5816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 05:17:06.877354472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:17:06.902669    5816 out.go:179] * Using the docker driver based on the existing profile
	I1216 05:17:06.905232    5816 start.go:309] selected driver: docker
	I1216 05:17:06.905267    5816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:17:06.905384    5816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:17:06.943599    5816 out.go:203] 
	W1216 05:17:06.945840    5816 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1216 05:17:06.948775    5816 out.go:203] 
	
	
	==> Docker <==
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685379878Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685414282Z" level=info msg="Initializing buildkit"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.786375434Z" level=info msg="Completed buildkit initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.794992212Z" level=info msg="Daemon has completed initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151030Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151330Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795240140Z" level=info msg="API listen on [::]:2376"
	Dec 16 05:02:27 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:27 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:02:28 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Loaded network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 05:02:28 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:17:11 functional-002200 dockerd[20947]: 2025/12/16 05:17:11 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 16 05:17:14 functional-002200 dockerd[20947]: time="2025-12-16T05:17:14.053159706Z" level=info msg="sbJoin: gwep4 ''->'93d98a415c4c', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:18:40.523675   46417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:18:40.524964   46417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:18:40.525903   46417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:18:40.527182   46417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:18:40.528211   46417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 05:02] CPU: 4 PID: 64252 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001010] RIP: 0033:0x7fd1009fcb20
	[  +0.000612] Code: Unable to access opcode bytes at RIP 0x7fd1009fcaf6.
	[  +0.000841] RSP: 002b:00007ffd77dbf430 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000856] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000836] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000800] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000980] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000812] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.813920] CPU: 4 PID: 64374 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f21a7b6eb20
	[  +0.000430] Code: Unable to access opcode bytes at RIP 0x7f21a7b6eaf6.
	[  +0.000678] RSP: 002b:00007ffd5fd33ec0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000807] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000827] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000787] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000791] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000788] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:18:40 up 55 min,  0 user,  load average: 0.54, 0.42, 0.44
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:18:37 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:18:37 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 16 05:18:37 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:37 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:37 functional-002200 kubelet[46259]: E1216 05:18:37.993706   46259 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:18:37 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:18:37 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:18:38 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 16 05:18:38 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:38 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:38 functional-002200 kubelet[46271]: E1216 05:18:38.728026   46271 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:18:38 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:18:38 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:18:39 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 643.
	Dec 16 05:18:39 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:39 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:39 functional-002200 kubelet[46285]: E1216 05:18:39.487207   46285 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:18:39 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:18:39 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:18:40 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 644.
	Dec 16 05:18:40 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:40 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:18:40 functional-002200 kubelet[46334]: E1216 05:18:40.256473   46334 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:18:40 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:18:40 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (572.8349ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (122.38s)
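Note: the kubelet section of the logs above points at the likely root cause for this failure group: kubelet exits on startup because it is configured to reject cgroup v1 hosts, and the WSL2 kernel backing this run still presents cgroup v1. With kubelet in a restart loop, the API server on port 8441 never comes up, which matches the "connection refused" errors in the describe-nodes output. A minimal sketch for confirming the cgroup version and kubelet state from inside the node (profile name taken from this run; the annotated outputs are assumptions, not captured from this job):

	# cgroup2fs => cgroup v2; tmpfs => cgroup v1 (expected outputs are hypothetical)
	minikube -p functional-002200 ssh -- stat -fc %T /sys/fs/cgroup/
	# inspect the kubelet restart loop directly
	minikube -p functional-002200 ssh -- sudo systemctl status kubelet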

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (242.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
E1216 05:19:55.680312   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:49316/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (563.261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
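Note: the repeated EOF warnings above are the test helper polling the apiserver's published port (127.0.0.1:49316, mapped to 8441/tcp per the docker inspect output below) with a label selector; with the apiserver down, every poll fails until the 4m0s deadline expires. A sketch of the equivalent manual query (assuming the kubectl context name matches the profile, as minikube normally configures):

	kubectl --context functional-002200 -n kube-system get pods -l integration-test=storage-provisioner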
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
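For reference, the HostConfig limits in the inspect output above decode to the resources the profile requested (Memory:4096 CPUs:2). A quick way to re-derive them, as a triage sketch that is not part of the test run:

	# 4294967296 B / 1048576 = 4096 MiB; 2000000000 NanoCpus / 10^9 = 2 CPUs
	docker inspect functional-002200 --format "{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}"
	# MemorySwap equals Memory, so the container gets no swap on top of its 4 GiB RAM limit.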
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (572.5727ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.0610796s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                         │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-002200 image ls                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image          │ functional-002200 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image          │ functional-002200 image ls                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image          │ functional-002200 image save --daemon kicbase/echo-server:functional-002200 --alsologtostderr                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh            │ functional-002200 ssh echo hello                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh            │ functional-002200 ssh cat /etc/hostname                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ tunnel         │ functional-002200 tunnel --alsologtostderr                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel         │ functional-002200 tunnel --alsologtostderr                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel         │ functional-002200 tunnel --alsologtostderr                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ addons         │ functional-002200 addons list                                                                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ addons         │ functional-002200 addons list -o json                                                                               │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ start          │ -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ start          │ -p functional-002200 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0           │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ start          │ -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-002200 --alsologtostderr -v=1                                                      │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ update-context │ functional-002200 update-context --alsologtostderr -v=2                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ update-context │ functional-002200 update-context --alsologtostderr -v=2                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ update-context │ functional-002200 update-context --alsologtostderr -v=2                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format short --alsologtostderr                                                         │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format yaml --alsologtostderr                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ ssh            │ functional-002200 ssh pgrep buildkitd                                                                               │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ image          │ functional-002200 image build -t localhost/my-image:functional-002200 testdata\build --alsologtostderr              │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format json --alsologtostderr                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	│ image          │ functional-002200 image ls --format table --alsologtostderr                                                         │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:17 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:17:06
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:17:06.453329    5816 out.go:360] Setting OutFile to fd 1532 ...
	I1216 05:17:06.497636    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:06.497636    5816 out.go:374] Setting ErrFile to fd 476...
	I1216 05:17:06.497636    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:06.512261    5816 out.go:368] Setting JSON to false
	I1216 05:17:06.515710    5816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3248,"bootTime":1765858978,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:17:06.515840    5816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:17:06.519311    5816 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:17:06.523675    5816 notify.go:221] Checking for updates...
	I1216 05:17:06.523724    5816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:17:06.526347    5816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:17:06.529287    5816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:17:06.531703    5816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:17:06.533890    5816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:17:06.536576    5816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:17:06.537778    5816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:17:06.656791    5816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:17:06.660998    5816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:17:06.894343    5816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 05:17:06.877354472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:17:06.902669    5816 out.go:179] * Using the docker driver based on existing profile
	I1216 05:17:06.905232    5816 start.go:309] selected driver: docker
	I1216 05:17:06.905267    5816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:17:06.905384    5816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:17:06.943599    5816 out.go:203] 
	W1216 05:17:06.945840    5816 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 05:17:06.948775    5816 out.go:203] 
	
	
	==> Docker <==
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685379878Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685414282Z" level=info msg="Initializing buildkit"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.786375434Z" level=info msg="Completed buildkit initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.794992212Z" level=info msg="Daemon has completed initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151030Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151330Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795240140Z" level=info msg="API listen on [::]:2376"
	Dec 16 05:02:27 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:27 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:02:28 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Loaded network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 05:02:28 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:17:11 functional-002200 dockerd[20947]: 2025/12/16 05:17:11 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 16 05:17:14 functional-002200 dockerd[20947]: time="2025-12-16T05:17:14.053159706Z" level=info msg="sbJoin: gwep4 ''->'93d98a415c4c', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:20:26.014667   48269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:20:26.015967   48269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:20:26.017092   48269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:20:26.018037   48269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:20:26.019582   48269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 05:02] CPU: 4 PID: 64252 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001010] RIP: 0033:0x7fd1009fcb20
	[  +0.000612] Code: Unable to access opcode bytes at RIP 0x7fd1009fcaf6.
	[  +0.000841] RSP: 002b:00007ffd77dbf430 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000856] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000836] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000800] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000980] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000812] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.813920] CPU: 4 PID: 64374 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f21a7b6eb20
	[  +0.000430] Code: Unable to access opcode bytes at RIP 0x7f21a7b6eaf6.
	[  +0.000678] RSP: 002b:00007ffd5fd33ec0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000807] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000827] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000787] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000791] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000788] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:20:26 up 56 min,  0 user,  load average: 0.71, 0.45, 0.44
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:20:22 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:20:23 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 782.
	Dec 16 05:20:23 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:23 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:23 functional-002200 kubelet[48090]: E1216 05:20:23.735054   48090 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:20:23 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:20:23 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:20:24 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 783.
	Dec 16 05:20:24 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:24 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:24 functional-002200 kubelet[48115]: E1216 05:20:24.451095   48115 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:20:24 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:20:24 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:20:25 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 784.
	Dec 16 05:20:25 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:25 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:25 functional-002200 kubelet[48145]: E1216 05:20:25.222200   48145 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:20:25 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:20:25 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:20:25 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 785.
	Dec 16 05:20:25 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:25 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:20:25 functional-002200 kubelet[48248]: E1216 05:20:25.980120   48248 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:20:25 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:20:25 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
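The kubelet section above explains the dead control plane: every restart (counters 782 through 785) fails validation with "kubelet is configured to not run on a host using cgroup v1", so no apiserver ever comes up and the describe-nodes calls are refused on localhost:8441. A minimal check of which cgroup hierarchy the node container actually sees, as a sketch assuming the kicbase container is still running:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the v1 layout this kubelet build rejects
	docker exec functional-002200 stat -fc %T /sys/fs/cgroup/
	# On a WSL2 host, cgroup v2 can be opted into via %USERPROFILE%\.wslconfig (assumed host-side remedy):
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all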
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (561.1649ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (242.82s)
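The status split in this test, Host "Running" against APIServer "Stopped", matches that crash loop: the node container is healthy while nothing listens on the apiserver port. A quick probe of the published endpoint, sketched under the assumption that the 8441/tcp -> 127.0.0.1:49316 mapping from the inspect output still holds:

	curl -k --max-time 5 https://127.0.0.1:49316/version   # expect EOF/reset while the apiserver is down
	docker exec functional-002200 systemctl status kubelet --no-pager --lines=5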

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-002200 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-002200 replace --force -f testdata\mysql.yaml: exit status 1 (20.2368474s)

                                                
                                                
** stderr ** 
	E1216 05:16:10.106493    3408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:20.193357    3408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:49316/api?timeout=32s": EOF
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:49316/api?timeout=32s": EOF

                                                
                                                
** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-002200 replace --force -f testdata\\mysql.yaml" failed: exit status 1
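The repeated "EOF" here (rather than "connection refused") is consistent with Docker's port proxy accepting the TCP connection on 127.0.0.1:49316 and then dropping it because nothing answers on 8441 inside the container. To confirm which server URL kubectl resolves for this context, a sketch assuming the profile context exists in the test KUBECONFIG:

	kubectl --context functional-002200 config view --minify -o jsonpath="{.clusters[0].cluster.server}"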
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (579.1477ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.2782763s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-002200 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh     │ functional-002200 ssh sudo systemctl is-active crio                                                                                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ license │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service list                                                                                                                            │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ service │ functional-002200 service list -o json                                                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service --namespace=default --https --url hello-node                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service hello-node --url --format={{.IP}}                                                                                               │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ service │ functional-002200 service hello-node --url                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image save kicbase/echo-server:functional-002200 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image rm kicbase/echo-server:functional-002200 --alsologtostderr                                                                        │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image save --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh     │ functional-002200 ssh echo hello                                                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh     │ functional-002200 ssh cat /etc/hostname                                                                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:02:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:02:22.143364   13524 out.go:360] Setting OutFile to fd 1016 ...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.184929   13524 out.go:374] Setting ErrFile to fd 816...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.200191   13524 out.go:368] Setting JSON to false
	I1216 05:02:22.202193   13524 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2363,"bootTime":1765858978,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:02:22.202193   13524 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:02:22.207191   13524 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:02:22.209167   13524 notify.go:221] Checking for updates...
	I1216 05:02:22.213806   13524 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:02:22.217226   13524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:02:22.219465   13524 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:02:22.221726   13524 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:02:22.223984   13524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:02:22.226535   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:22.226535   13524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:02:22.342632   13524 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:02:22.345860   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.582056   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.565555373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.589056   13524 out.go:179] * Using the docker driver based on existing profile
	I1216 05:02:22.591055   13524 start.go:309] selected driver: docker
	I1216 05:02:22.591055   13524 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:22.592055   13524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:02:22.597056   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.818036   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.800509482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.866190   13524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:02:22.866190   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:22.866190   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:22.866190   13524 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:22.870532   13524 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 05:02:22.874014   13524 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 05:02:22.876014   13524 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:02:22.880521   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:22.880869   13524 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:02:22.880869   13524 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 05:02:22.880869   13524 cache.go:65] Caching tarball of preloaded images
	I1216 05:02:22.880869   13524 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 05:02:22.881393   13524 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 05:02:22.881584   13524 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 05:02:22.957945   13524 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:02:22.957945   13524 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:02:22.957945   13524 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:02:22.957945   13524 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:02:22.957945   13524 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-002200"
	I1216 05:02:22.957945   13524 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:02:22.957945   13524 fix.go:54] fixHost starting: 
	I1216 05:02:22.964754   13524 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 05:02:23.020643   13524 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 05:02:23.020643   13524 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:02:23.024655   13524 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 05:02:23.024655   13524 machine.go:94] provisionDockerMachine start ...
	I1216 05:02:23.028059   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.089226   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.089720   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.089720   13524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:02:23.263587   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 05:02:23.263587   13524 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 05:02:23.269095   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.343706   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.344098   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.344098   13524 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 05:02:23.523871   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 05:02:23.527605   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.582373   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.582799   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.582799   13524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:02:23.744731   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
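[Editor's note] The shell fragment above is an ensure-line edit on /etc/hosts: if the hostname already resolves, leave the file alone; otherwise rewrite an existing 127.0.1.1 entry or append one. A minimal standalone Go sketch of the same logic (a hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee logic from the log: skip if any
// line already ends with the hostname, else rewrite the 127.0.1.1 line or
// append a fresh entry.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostsEntry(in, "functional-002200"))
}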
	I1216 05:02:23.744781   13524 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 05:02:23.744810   13524 ubuntu.go:190] setting up certificates
	I1216 05:02:23.744810   13524 provision.go:84] configureAuth start
	I1216 05:02:23.748413   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:23.805299   13524 provision.go:143] copyHostCerts
	I1216 05:02:23.805299   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 05:02:23.805299   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 05:02:23.805870   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 05:02:23.806787   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 05:02:23.806813   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 05:02:23.806957   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 05:02:23.807512   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 05:02:23.807512   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 05:02:23.807512   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 05:02:23.808114   13524 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 05:02:24.024499   13524 provision.go:177] copyRemoteCerts
	I1216 05:02:24.027499   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:02:24.030499   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.084455   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:24.207064   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 05:02:24.231047   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:02:24.253218   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:02:24.278696   13524 provision.go:87] duration metric: took 533.8823ms to configureAuth
	I1216 05:02:24.278696   13524 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:02:24.279294   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:24.283136   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.338661   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.338661   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.338661   13524 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 05:02:24.501259   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 05:02:24.501259   13524 ubuntu.go:71] root file system type: overlay
	I1216 05:02:24.503332   13524 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 05:02:24.506757   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.561628   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.562204   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.562204   13524 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 05:02:24.732222   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 05:02:24.736823   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.789603   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.790705   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.790705   13524 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 05:02:24.956843   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
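[Editor's note] The `diff ... || { mv ...; systemctl restart docker; }` step above makes the unit install idempotent: dockerd is only restarted when the rendered unit actually differs from what is on disk. A self-contained Go sketch of that compare-then-swap pattern (file paths are placeholders, not the real /lib/systemd/system paths):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged replaces dst with src only when their contents differ,
// reporting whether a daemon-reload/restart would be needed.
func installIfChanged(dst, src string) (changed bool, err error) {
	want, err := os.ReadFile(src)
	if err != nil {
		return false, err
	}
	have, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // unit unchanged: skip the restart entirely
	}
	return true, os.Rename(src, dst)
}

func main() {
	changed, err := installIfChanged("docker.service", "docker.service.new")
	if err != nil {
		fmt.Println("install failed:", err)
		return
	}
	if changed {
		fmt.Println("unit changed: would run daemon-reload && restart docker")
	} else {
		fmt.Println("unit unchanged: nothing to do")
	}
}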
	I1216 05:02:24.956843   13524 machine.go:97] duration metric: took 1.9321739s to provisionDockerMachine
	I1216 05:02:24.956843   13524 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 05:02:24.956843   13524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:02:24.961328   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:02:24.963780   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.018396   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.151694   13524 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:02:25.159738   13524 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:02:25.159738   13524 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 05:02:25.160372   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 05:02:25.161048   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 05:02:25.165137   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 05:02:25.176929   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 05:02:25.202240   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 05:02:25.226560   13524 start.go:296] duration metric: took 269.6889ms for postStartSetup
	I1216 05:02:25.230465   13524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:02:25.232786   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.287361   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.409366   13524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:02:25.419299   13524 fix.go:56] duration metric: took 2.4613371s for fixHost
	I1216 05:02:25.419299   13524 start.go:83] releasing machines lock for "functional-002200", held for 2.4613371s
	I1216 05:02:25.423876   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:25.479590   13524 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 05:02:25.483988   13524 ssh_runner.go:195] Run: cat /version.json
	I1216 05:02:25.483988   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.487582   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.542893   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.550987   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	W1216 05:02:25.660611   13524 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 05:02:25.682804   13524 ssh_runner.go:195] Run: systemctl --version
	I1216 05:02:25.696301   13524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:02:25.703847   13524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:02:25.708899   13524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:02:25.720784   13524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:02:25.720820   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:25.720861   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:25.720884   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:25.746032   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 05:02:25.756672   13524 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 05:02:25.756737   13524 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 05:02:25.764577   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 05:02:25.778652   13524 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 05:02:25.782944   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 05:02:25.802561   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.822362   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 05:02:25.841368   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.860152   13524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:02:25.878804   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 05:02:25.897721   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 05:02:25.916509   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 05:02:25.935848   13524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:02:25.954408   13524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:02:25.972671   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.135013   13524 ssh_runner.go:195] Run: sudo systemctl restart containerd
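[Editor's note] The run of `sed -i -r` commands above patches /etc/containerd/config.toml line by line (sandbox image, SystemdCgroup, runc v2 runtime, CNI conf dir) before the daemon-reload and restart. A rough Go equivalent of two of those edits, applied to an in-memory config purely for illustration (the input snippet here is invented, not taken from the node):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
`
	// Pin the sandbox image, preserving the line's leading indentation,
	// just like the sed capture group \1 in the log.
	config = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(config, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	// Force the cgroupfs driver, matching the SystemdCgroup = false edit.
	config = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(config, `${1}SystemdCgroup = false`)
	fmt.Print(config)
}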
	I1216 05:02:26.286857   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:26.286857   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:26.291710   13524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 05:02:26.313739   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.335410   13524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:02:26.394402   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.416456   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 05:02:26.433425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:26.458250   13524 ssh_runner.go:195] Run: which cri-dockerd
	I1216 05:02:26.469192   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 05:02:26.479991   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 05:02:26.508331   13524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 05:02:26.653923   13524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 05:02:26.807509   13524 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 05:02:26.808040   13524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 05:02:26.830421   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 05:02:26.853437   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.993507   13524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 05:02:27.802449   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:02:27.823963   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 05:02:27.846489   13524 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 05:02:27.872589   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:27.893632   13524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 05:02:28.032388   13524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 05:02:28.173426   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.303647   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 05:02:28.327061   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 05:02:28.347849   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.515228   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 05:02:28.617223   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:28.634479   13524 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 05:02:28.638575   13524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 05:02:28.646251   13524 start.go:564] Will wait 60s for crictl version
	I1216 05:02:28.650257   13524 ssh_runner.go:195] Run: which crictl
	I1216 05:02:28.663129   13524 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:02:28.707678   13524 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 05:02:28.711140   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.754899   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.798065   13524 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 05:02:28.801328   13524 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 05:02:28.928679   13524 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 05:02:28.933317   13524 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 05:02:28.945787   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:29.006099   13524 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 05:02:29.009213   13524 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1216 05:02:29.009213   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:29.012544   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.044964   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.045018   13524 docker.go:621] Images already preloaded, skipping extraction
	I1216 05:02:29.050176   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.078871   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.078871   13524 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:02:29.078871   13524 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 05:02:29.078871   13524 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:02:29.083733   13524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 05:02:29.153386   13524 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 05:02:29.153441   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:29.153441   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:29.153441   13524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:02:29.153497   13524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:02:29.153740   13524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
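[Editor's note] The kubeadm config printed above is rendered from the cluster profile (node IP, API server port, Kubernetes version, extra args). A toy Go rendering of just the InitConfiguration stanza with text/template, to show how the profile fields map in; the template and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the InitConfiguration emitted in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	params := struct {
		NodeIP, CRISocket, NodeName string
		APIServerPort               int
	}{"192.168.49.2", "/var/run/cri-dockerd.sock", "functional-002200", 8441}
	template.Must(template.New("kubeadm").Parse(initCfg)).Execute(os.Stdout, params)
}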
	I1216 05:02:29.159735   13524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:02:29.170652   13524 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:02:29.175184   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:02:29.187845   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 05:02:29.208540   13524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:02:29.226431   13524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1216 05:02:29.250294   13524 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:02:29.261010   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:29.404128   13524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:02:30.007557   13524 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 05:02:30.007557   13524 certs.go:195] generating shared ca certs ...
	I1216 05:02:30.007557   13524 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 05:02:30.008887   13524 certs.go:257] generating profile certs ...
	I1216 05:02:30.013750   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 05:02:30.014952   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 05:02:30.015510   13524 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 05:02:30.017231   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:02:30.047196   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 05:02:30.070848   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:02:30.096702   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:02:30.121970   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:02:30.146884   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:02:30.173170   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:02:30.199629   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:02:30.226778   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 05:02:30.250105   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:02:30.272968   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 05:02:30.298291   13524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
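The certs.go lines above record minikube reusing still-valid profile certificates and then copying host-side certs into /var/lib/minikube/certs on the node. A minimal Go sketch of that skip-if-valid-then-copy idea; paths and helper names are illustrative, not minikube's actual code:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"io"
    	"os"
    	"time"
    )

    // certStillValid reports whether the PEM cert at path parses and will
    // remain valid for at least the given margin from now.
    func certStillValid(path string, margin time.Duration) bool {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false
    	}
    	return time.Now().Add(margin).Before(cert.NotAfter)
    }

    // copyFile stands in for the scp step recorded in the log.
    func copyFile(src, dst string) error {
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	src := "ca.crt" // hypothetical local path
    	if certStillValid(src, 24*time.Hour) {
    		fmt.Println("skipping regeneration, cert still valid")
    	}
    	if err := copyFile(src, "/tmp/ca.crt"); err != nil {
    		fmt.Println("copy failed:", err)
    	}
    }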
	I1216 05:02:30.318635   13524 ssh_runner.go:195] Run: openssl version
	I1216 05:02:30.332668   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.355358   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 05:02:30.372181   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.379909   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.384371   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.432373   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:02:30.447662   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.464870   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:02:30.481196   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.489322   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.492995   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.540388   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:02:30.558567   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.574821   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 05:02:30.592525   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.598815   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.603416   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.650141   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
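Each certificate above is installed under /usr/share/ca-certificates and then linked as /etc/ssl/certs/<subject-hash>.0, the lookup name OpenSSL's hashed cert directory uses (3ec20f2e.0, b5213941.0 and 51391683.0 are those hashes). A sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does; error handling is simplified and running it requires write access to /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // subjectHash returns the 8-hex-digit subject hash that names the
    // /etc/ssl/certs/<hash>.0 symlink, via openssl x509 -hash -noout.
    func subjectHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	hash, err := subjectHash(cert)
    	if err != nil {
    		fmt.Println("hash failed:", err)
    		return
    	}
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// ln -fs equivalent: drop any existing link, then point it at the cert.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Println("symlink failed:", err)
    	}
    }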
	I1216 05:02:30.666001   13524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:02:30.677986   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:02:30.724950   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:02:30.775114   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:02:30.821700   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:02:30.868594   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:02:30.916597   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
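The -checkend 86400 runs above ask openssl whether each control-plane certificate remains valid for another 86400 seconds (24 hours); a non-zero exit would force regeneration. A tiny illustrative check of that exit-status convention:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// openssl exits 0 if the cert is still valid 86400s (24h) from now,
    	// non-zero if it will have expired by then; the caller treats
    	// non-zero as "regenerate". Path taken from the log.
    	cmd := exec.Command("openssl", "x509", "-noout",
    		"-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"-checkend", "86400")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("cert expires within 24h (or check failed):", err)
    		return
    	}
    	fmt.Println("cert valid for at least 24h")
    }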
	I1216 05:02:30.959171   13524 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:30.963942   13524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:30.994317   13524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:02:31.005043   13524 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:02:31.005043   13524 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:02:31.009827   13524 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:02:31.023534   13524 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.026842   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:31.080676   13524 kubeconfig.go:125] found "functional-002200" server: "https://127.0.0.1:49316"
	I1216 05:02:31.087667   13524 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:02:31.101385   13524 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:45:52.574738576 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 05:02:29.239240136 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
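The diff above is the drift check: comparing the kubeadm.yaml deployed at 04:45 with the freshly rendered .new shows the ExtraConfig run swapped enable-admission-plugins to NamespaceAutoProvision, so the cluster must be reconfigured rather than merely restarted. A sketch of drift detection via diff's exit status (0 = identical, 1 = drift, 2 = error); file names are taken from the log, the driver itself is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrift runs diff -u old new; exit 0 means identical, exit 1
    // means the configs drifted (reconfigure), anything else is an error.
    func configDrift(oldPath, newPath string) (bool, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		fmt.Printf("detected drift:\n%s", out)
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	drift, err := configDrift("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("diff failed:", err)
    		return
    	}
    	fmt.Println("drift:", drift)
    }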
	I1216 05:02:31.101385   13524 kubeadm.go:1161] stopping kube-system containers ...
	I1216 05:02:31.105991   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:31.137859   13524 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 05:02:31.162569   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:02:31.173570   13524 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 04:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 04:49 /etc/kubernetes/scheduler.conf
	
	I1216 05:02:31.178070   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:02:31.193447   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:02:31.204464   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.208708   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:02:31.223814   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.236112   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.240050   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.256323   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:02:31.270390   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.274655   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
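Each component kubeconfig above is grepped for the expected control-plane endpoint; grep exiting 1 means the file points elsewhere, so it is removed and left for kubeadm to regenerate in the kubeconfig phase below. A compact sketch of that prune step, with the endpoint and paths taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8441"
    	for _, conf := range []string{
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil {
    			continue
    		}
    		// grep equivalent: keep the file only if it names the endpoint.
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Println("removing stale", conf)
    			_ = os.Remove(conf)
    		}
    	}
    }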
	I1216 05:02:31.291834   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:02:31.309287   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.373785   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.743926   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.973968   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:32.044614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
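Rather than a full kubeadm init, the restart path replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same kubeadm.yaml and with the versioned binaries directory prepended to PATH. An illustrative driver for that sequence; it is a sketch of the pattern, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Paths mirror the log; each phase is an ordinary kubeadm
    	// subcommand run against the same config file.
    	binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", cfg)
    		cmd := exec.Command(binDir+"/kubeadm", args...)
    		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }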
	I1216 05:02:32.128503   13524 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:02:32.133080   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:32.634591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:33.135532   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:33.633951   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:34.133670   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:34.636362   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:35.133362   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:35.634567   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:36.133378   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:36.634652   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:37.133364   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:37.635212   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:38.133996   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:38.634136   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:39.133538   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:39.634806   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:40.133591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:40.633797   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:41.133611   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:41.634039   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:42.133614   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:42.634568   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:43.134027   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:43.634254   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:44.133984   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:44.634389   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:45.133761   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:45.634255   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:46.134409   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:46.634402   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:47.133336   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:47.634728   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:48.133723   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:48.634056   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:49.133313   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:49.634057   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:50.134418   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:50.633737   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:51.133246   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:51.634053   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:52.134086   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:52.633592   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:53.134909   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:53.633883   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:54.133900   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:54.633980   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:55.133861   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:55.634905   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:56.133623   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:56.633940   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:57.133423   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:57.635127   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:58.133876   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:58.634340   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:59.133894   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:59.633621   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:00.136295   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:00.633723   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:01.133850   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:01.630633   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:02.135818   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:02.635548   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:03.134173   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:03.634568   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:04.133911   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:04.634440   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:05.133383   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:05.633913   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:06.133618   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:06.635004   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:07.133967   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:07.634270   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:08.133741   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:08.633647   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:09.134149   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:09.634014   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:10.133536   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:10.633733   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:11.134705   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:11.634320   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:12.134680   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:12.634430   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:13.134597   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:13.634710   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:14.134733   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:14.634512   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:15.134218   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:15.633594   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:16.134090   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:16.634446   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:17.134183   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:17.634400   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:18.134566   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:18.633972   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:19.134271   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:19.634238   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:20.134883   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:20.634468   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:21.134017   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:21.634112   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:22.135187   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:22.634480   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:23.134672   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:23.633614   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:24.134339   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:24.634245   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:25.135181   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:25.634475   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:26.134348   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:26.634151   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:27.133880   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:27.633366   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:28.133826   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:28.634409   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:29.133350   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:29.633502   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:30.134183   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:30.633644   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:31.133961   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:31.634081   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
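The half-second pgrep cadence above is a poll-until-deadline wait for the kube-apiserver process; after roughly a minute with no match, minikube stops waiting and falls through to the diagnostic log gathering that follows. A minimal sketch of such a loop (the timeout value is an assumption inferred from the timestamps):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep on the cadence seen in the log until a
    // matching process appears or the deadline passes.
    func waitForAPIServer(timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return true // pgrep exit 0: a matching process exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }

    func main() {
    	if !waitForAPIServer(60 * time.Second) {
    		fmt.Println("apiserver process never appeared; gathering logs")
    	}
    }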
	I1216 05:03:32.132156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:32.161948   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.161948   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:32.165532   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:32.190451   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.190451   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:32.194000   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:32.221132   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.221201   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:32.224735   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:32.251199   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.251265   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:32.254803   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:32.285399   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.285399   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:32.288927   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:32.316407   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.316407   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:32.320399   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:32.348258   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.348330   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:32.348330   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:32.348330   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:32.391508   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:32.391508   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:32.457156   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:32.457156   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:32.517211   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:32.517211   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:32.547816   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:32.547816   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:32.628349   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
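When the wait fails, logs.go probes each control-plane component by its kubelet container name (k8s_<component>) before dumping journalctl, dmesg, and a kubectl describe nodes that, as shown, is refused because nothing is listening on 8441. A sketch of the per-component docker ps probe; the component list is copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		// Mirrors the docker ps probes above: list container IDs whose
    		// name matches the kubelet's k8s_<component> naming scheme.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println(c, "probe failed:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers\n", c, len(ids))
    	}
    }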
	I1216 05:03:35.133793   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:35.155411   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:35.187090   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.187090   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:35.190727   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:35.222945   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.223013   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:35.226777   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:35.253910   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.253910   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:35.257543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:35.284715   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.284715   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:35.288228   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:35.317179   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.317179   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:35.320898   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:35.347702   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.347702   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:35.351146   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:35.380831   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.380865   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:35.380865   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:35.380894   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:35.460624   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:35.460624   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:35.460624   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:35.503284   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:35.503284   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:35.556840   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:35.556840   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:35.619567   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:35.619567   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.155257   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:38.180004   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:38.207932   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.207932   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:38.211988   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:38.240313   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.240313   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:38.243787   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:38.271584   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.271584   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:38.275398   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:38.302890   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.302890   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:38.308028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:38.334217   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.334217   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:38.338421   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:38.366179   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.366179   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:38.370864   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:38.399763   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.399763   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:38.399763   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:38.399763   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.427010   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:38.427010   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:38.520678   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:38.520678   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:38.520678   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:38.565076   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:38.565076   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:38.618166   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:38.618166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:41.184770   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:41.209166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:41.236776   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.236853   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:41.240392   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:41.270413   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.270413   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:41.274447   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:41.299898   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.299898   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:41.303698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:41.331395   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.331395   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:41.335559   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:41.360930   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.360930   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:41.364502   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:41.391119   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.391119   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:41.394804   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:41.421862   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.421862   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:41.421862   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:41.421862   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:41.485064   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:41.485064   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:41.515166   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:41.515166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:41.602242   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:41.602283   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:41.602283   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:41.643359   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:41.643359   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:44.196285   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:44.218200   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:44.246503   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.246585   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:44.251156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:44.281646   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.281711   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:44.285404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:44.314582   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.314582   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:44.318424   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:44.345658   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.345658   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:44.349423   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:44.378211   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.378272   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:44.381956   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:44.410544   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.410544   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:44.414620   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:44.445500   13524 logs.go:282] 0 containers: []
	W1216 05:03:44.445500   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:44.445500   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:44.445500   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:44.507872   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:44.507872   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:44.538767   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:44.538767   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:44.622136   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:44.612744   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.613558   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.618132   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.619496   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.620483   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:44.612744   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.613558   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.618132   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.619496   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:44.620483   23611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:44.622136   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:44.622136   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:44.663418   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:44.663418   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:47.212335   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:47.235078   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:47.263884   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.263884   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:47.267298   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:47.296349   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.296349   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:47.300145   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:47.328463   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.328463   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:47.332047   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:47.360277   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.360277   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:47.365253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:47.394405   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.394405   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:47.398327   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:47.424342   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.424342   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:47.427553   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:47.457407   13524 logs.go:282] 0 containers: []
	W1216 05:03:47.457407   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:47.457407   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:47.457482   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:47.518376   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:47.518376   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:47.549518   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:47.549518   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:47.633807   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:47.621666   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.623404   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.625321   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.626276   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.627968   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:47.621666   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.623404   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.625321   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.626276   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:47.627968   23760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:47.633807   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:47.633807   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:47.677347   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:47.677347   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:50.228661   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:50.251356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:50.280242   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.280242   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:50.284021   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:50.312131   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.312131   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:50.316156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:50.345649   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.345649   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:50.349420   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:50.378641   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.378641   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:50.382647   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:50.412461   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.412461   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:50.416175   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:50.442845   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.442845   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:50.446814   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:50.475928   13524 logs.go:282] 0 containers: []
	W1216 05:03:50.475928   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:50.475928   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:50.475928   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:50.557550   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:50.546013   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.546957   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.548058   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550001   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550942   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:50.546013   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.546957   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.548058   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550001   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:50.550942   23902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:50.557550   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:50.557550   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:50.598249   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:50.599249   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:50.649236   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:50.649236   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:50.708474   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:50.708474   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:53.243724   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:53.265421   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:53.296102   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.296102   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:53.299979   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:53.326976   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.326976   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:53.330578   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:53.359456   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.359456   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:53.363072   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:53.390071   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.390071   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:53.393691   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:53.420871   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.420871   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:53.424512   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:53.453800   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.453800   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:53.457145   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:53.484517   13524 logs.go:282] 0 containers: []
	W1216 05:03:53.484517   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:53.484517   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:53.484517   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:53.528040   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:53.528040   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:53.587553   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:53.587553   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:53.617548   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:53.617548   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:53.700026   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:53.688408   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.689532   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.690618   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692085   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692939   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:53.688408   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.689532   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.690618   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692085   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:53.692939   24074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:53.700026   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:53.700026   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:56.246963   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:56.268638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:56.299094   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.299094   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:56.302639   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:56.332517   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.332560   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:56.336308   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:56.365426   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.365426   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:56.369138   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:56.397544   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.397619   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:56.401112   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:56.429549   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.429549   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:56.433429   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:56.460742   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.460742   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:56.464610   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:56.491304   13524 logs.go:282] 0 containers: []
	W1216 05:03:56.491304   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:56.491304   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:56.491304   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:56.537801   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:56.537801   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:56.596883   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:56.596883   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:56.627551   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:56.627551   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:56.716773   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:56.704143   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.705738   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.709298   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.710507   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.711360   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:56.704143   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.705738   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.709298   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.710507   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:56.711360   24220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:56.716773   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:56.716773   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:59.265591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:59.287053   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:59.314567   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.314567   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:59.318471   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:59.344778   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.344778   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:59.348198   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:59.377352   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.377352   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:59.381355   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:59.409757   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.409757   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:59.413264   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:59.442030   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.442030   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:59.447566   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:59.476800   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.476800   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:59.480486   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:59.510562   13524 logs.go:282] 0 containers: []
	W1216 05:03:59.510562   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:59.510562   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:59.510562   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:59.594557   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:59.583933   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.585461   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.586898   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.588167   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.589054   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:59.583933   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.585461   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.586898   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.588167   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:59.589054   24355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:59.594557   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:59.594557   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:59.635862   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:59.635862   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:59.680837   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:59.680837   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:59.742598   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:59.742598   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:02.276919   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:02.299620   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:02.328580   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.328580   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:02.332001   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:02.362532   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.362532   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:02.367709   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:02.398639   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.398639   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:02.402478   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:02.429515   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.429515   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:02.434024   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:02.462711   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.462771   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:02.465977   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:02.496760   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.496760   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:02.500343   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:02.528038   13524 logs.go:282] 0 containers: []
	W1216 05:04:02.528082   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:02.528082   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:02.528117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:02.591712   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:02.591712   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:02.621318   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:02.621318   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:02.725138   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:02.714257   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.715298   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.717709   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.718396   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.720971   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:02.714257   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.715298   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.717709   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.718396   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:02.720971   24515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:02.725138   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:02.725138   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:02.765954   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:02.765954   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:05.326035   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:05.347411   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:05.372745   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.372745   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:05.376358   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:05.403930   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.403930   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:05.406957   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:05.437512   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.437512   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:05.441038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:05.468927   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.468973   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:05.472507   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:05.499239   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.499239   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:05.503303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:05.529451   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.529512   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:05.533654   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:05.561652   13524 logs.go:282] 0 containers: []
	W1216 05:04:05.561652   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:05.561652   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:05.561652   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:05.604232   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:05.604232   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:05.656685   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:05.656714   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:05.718388   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:05.718388   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:05.748808   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:05.748808   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:05.832901   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:05.823709   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.825763   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.826966   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.828014   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.829256   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:05.823709   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.825763   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.826966   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.828014   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:05.829256   24690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
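	(The cycle repeating above is minikube's control-plane health probe: it first looks for a live kube-apiserver process, then falls back to listing each expected control-plane container by its k8s_ name prefix. A minimal by-hand reproduction inside the node, using the same commands the log shows — assuming the node is reachable, e.g. via minikube ssh:
	
	    # does a kube-apiserver process exist for this cluster?
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    # fall back: is there a container, running or exited, with the expected name?
	    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	
	Both return nothing here, which is why every probe ends with "0 containers" and a "No container was found matching" warning.)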
	I1216 05:04:08.338915   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:08.361157   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:08.392451   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.392451   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:08.396684   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:08.423351   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.423351   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:08.429970   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:08.457365   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.457365   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:08.460969   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:08.489550   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.489550   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:08.492908   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:08.522740   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.522740   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:08.526558   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:08.555230   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.555230   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:08.558834   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:08.588132   13524 logs.go:282] 0 containers: []
	W1216 05:04:08.588132   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:08.588132   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:08.588132   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:08.648570   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:08.648570   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:08.679084   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:08.679117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:08.767825   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:08.758330   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.759809   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.761174   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763021   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763841   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:08.758330   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.759809   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.761174   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763021   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:08.763841   24817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:08.767825   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:08.767825   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:08.813493   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:08.813493   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:11.371323   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:11.393671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:11.423912   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.423912   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:11.426874   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:11.457321   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.457321   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:11.460999   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:11.491719   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.491742   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:11.495112   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:11.524188   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.524188   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:11.530312   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:11.558213   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.558213   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:11.562148   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:11.587695   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.587695   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:11.591166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:11.618568   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.618568   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:11.618568   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:11.618568   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:11.700342   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:11.691304   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.692191   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.695157   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.697226   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.698318   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:11.691304   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.692191   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.695157   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.697226   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.698318   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:11.700342   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:11.700342   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:11.741856   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:11.741856   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:11.788648   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:11.788648   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:11.849193   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:11.849193   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:14.383220   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:14.404569   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:14.434777   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.434777   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:14.438799   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:14.466806   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.466806   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:14.470274   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:14.496413   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.496413   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:14.500050   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:14.531727   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.531727   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:14.535294   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:14.563393   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.563393   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:14.567315   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:14.592541   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.592541   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:14.596104   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:14.628287   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.628287   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:14.628287   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:14.628287   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:14.692122   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:14.692122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:14.720935   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:14.720935   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:14.809952   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:14.799452   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.800203   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.804585   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.805531   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.807780   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:14.799452   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.800203   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.804585   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.805531   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.807780   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:14.809952   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:14.809952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:14.853842   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:14.853842   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:17.408509   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:17.431899   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:17.459863   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.459863   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:17.463546   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:17.489686   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.489686   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:17.493208   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:17.521484   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.521484   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:17.525013   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:17.552847   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.552847   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:17.556723   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:17.583677   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.583677   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:17.587267   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:17.613916   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.613916   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:17.617383   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:17.649827   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.649827   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:17.649827   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:17.649827   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:17.697170   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:17.697170   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:17.754919   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:17.754919   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:17.784122   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:17.784122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:17.864432   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:17.854159   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.855168   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.856276   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.857030   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.859249   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:17.864463   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:17.864463   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
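	Each polling cycle above begins by enumerating the expected control-plane containers by name. A minimal bash sketch of that probe, reconstructed from the commands in the log (the component list and the k8s_ name prefix come straight from the lines above; the real logic lives in minikube's logs.go, so treat this as an illustration, not the actual implementation):
	
	# probe each expected control-plane component, as the log gatherer does
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(docker ps -a --filter=name=k8s_"$c" --format '{{.ID}}')
	  # an empty result corresponds to the "No container was found matching ..." warnings
	  [ -z "$ids" ] && echo "no container matching \"$c\""
	done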
	I1216 05:04:20.414214   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:20.438174   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:20.468253   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.468253   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:20.471621   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:20.500056   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.500056   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:20.503669   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:20.535901   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.535901   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:20.539210   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:20.566366   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.566366   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:20.570012   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:20.599351   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.599351   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:20.603383   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:20.629474   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.629474   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:20.636460   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:20.662795   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.662795   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:20.662795   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:20.662795   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:20.723615   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:20.723615   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:20.752636   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:20.752636   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:20.837861   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:20.826007   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.827210   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.829865   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.831983   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.832937   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:20.837861   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:20.837861   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:20.879492   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:20.879492   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
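	The container-status step relies on a shell fallback: use crictl when it is installed, otherwise fall back to docker. The same one-liner from the log, written with $() and comments:
	
	# `which crictl || echo crictl` prints the crictl path if present, or the bare
	# word "crictl"; if that command then fails, the || branch runs docker instead
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a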
	I1216 05:04:23.436591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:23.459603   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:23.484610   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.485910   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:23.489800   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:23.516517   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.516517   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:23.520034   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:23.549815   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.549815   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:23.553056   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:23.583026   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.583026   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:23.586920   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:23.615403   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.615403   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:23.618776   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:23.647271   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.647271   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:23.650983   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:23.677461   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.677520   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:23.677520   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:23.677559   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:23.743913   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:23.743913   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:23.773462   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:23.773462   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:23.862441   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:23.853159   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.854280   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.855338   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.856368   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.857459   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:23.862502   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:23.862526   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:23.903963   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:23.903963   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
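	All of the kubectl failures above are the same symptom: nothing is listening on localhost:8441 inside the node. One way to confirm that directly, assuming shell access to the node (the profile name is a placeholder, not taken from this log):
	
	# hypothetical spot check; replace <profile> with the failing cluster's profile
	minikube -p <profile> ssh -- sudo curl -sk https://localhost:8441/healthz
	# "connection refused" here matches the kubectl errors in the cycles above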
	I1216 05:04:26.456802   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:26.479694   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:26.507859   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.507859   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:26.511781   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:26.537683   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.537683   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:26.541445   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:26.569611   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.569611   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:26.573478   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:26.604349   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.604377   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:26.609300   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:26.638784   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.638784   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:26.641986   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:26.669720   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.669720   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:26.673932   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:26.700387   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.700387   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:26.700387   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:26.700387   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:26.766000   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:26.766000   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:26.796095   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:26.796095   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:26.882695   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:26.871861   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.872835   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.874610   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.876128   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.877304   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:26.882695   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:26.882695   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:26.924768   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:26.924768   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:29.478546   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:29.499904   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:29.527110   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.527110   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:29.531186   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:29.558221   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.558221   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:29.561810   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:29.591838   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.591838   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:29.596165   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:29.623642   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.623642   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:29.627192   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:29.652493   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.652526   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:29.655375   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:29.682914   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.682957   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:29.686351   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:29.714123   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.714123   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:29.714123   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:29.714123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:29.774899   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:29.774899   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:29.802342   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:29.802342   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:29.885111   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:29.875757   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.876923   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.877963   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.879017   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.880228   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:29.885242   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:29.885242   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:29.926184   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:29.926184   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:32.480583   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:32.502826   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:32.533439   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.533463   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:32.537047   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:32.564845   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.564845   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:32.568203   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:32.595465   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.595526   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:32.598404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:32.626657   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.626657   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:32.630597   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:32.656354   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.656354   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:32.660989   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:32.690899   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.690920   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:32.693919   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:32.721353   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.721353   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:32.721353   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:32.721353   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:32.783967   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:32.783967   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:32.813914   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:32.813914   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:32.893277   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:32.884279   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.884964   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.887572   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.888527   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.889778   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:32.893277   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:32.893277   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:32.936887   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:32.936887   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
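	The `sudo pgrep -xnf kube-apiserver.*minikube.*` probe recurs roughly every three seconds across these cycles. A bash analogue of that wait loop, with the interval inferred from the timestamps (the actual retry and timeout logic is in minikube's Go code, not shown in this log):
	
	# poll until a kube-apiserver process belonging to this cluster appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3  # matches the ~3s cadence visible in the timestamps above
	done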
	I1216 05:04:35.508248   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:35.532690   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:35.562568   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.562568   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:35.566845   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:35.593817   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.593817   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:35.597629   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:35.626272   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.626272   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:35.629313   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:35.660523   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.660523   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:35.664731   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:35.696512   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.696512   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:35.699886   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:35.730008   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.730008   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:35.733873   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:35.759351   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.759351   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:35.760366   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:35.760366   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:35.805169   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:35.805169   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:35.871943   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:35.871943   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:35.902094   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:35.902094   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:35.984144   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:35.975517   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.976548   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.977611   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.978767   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.980051   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:35.984671   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:35.984671   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:38.532401   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:38.553975   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:38.587094   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.587163   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:38.590542   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:38.615078   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.615078   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:38.620176   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:38.646601   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.646601   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:38.649820   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:38.678850   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.678850   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:38.681929   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:38.708321   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.708380   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:38.711681   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:38.740769   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.740859   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:38.744600   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:38.773706   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.773706   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:38.773706   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:38.773706   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:38.802001   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:38.802997   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:38.884848   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:38.877013   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.878352   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.879473   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.880593   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.881944   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:38.884848   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:38.884848   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:38.927525   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:38.927525   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:38.973952   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:38.973952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:41.541093   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:41.564290   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:41.592889   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.592889   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:41.597074   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:41.626087   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.626087   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:41.630076   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:41.656581   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.656581   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:41.660739   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:41.689073   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.689073   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:41.692998   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:41.718767   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.718767   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:41.722605   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:41.750884   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.750884   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:41.754652   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:41.780815   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.780815   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:41.780815   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:41.780815   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:41.872864   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:41.862126   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.863102   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.867559   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.868025   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.869518   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:41.872864   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:41.872864   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:41.911229   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:41.911229   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:41.958721   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:41.958721   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:42.017563   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:42.017563   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
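	Note that the describe-nodes step uses the node-local kubectl binary for the Kubernetes version under test together with the node's own kubeconfig, so it targets the apiserver on port 8441 rather than whatever the host kubeconfig points at. The command from the log, wrapped for readability:
	
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig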
	I1216 05:04:44.553294   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:44.576740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:44.607009   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.607009   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:44.610623   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:44.635971   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.635971   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:44.639338   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:44.664675   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.664675   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:44.667916   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:44.696295   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.696329   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:44.700356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:44.727661   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.727661   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:44.731273   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:44.759144   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.759174   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:44.762982   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:44.790033   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.790033   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:44.790080   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:44.790080   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:44.817221   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:44.817221   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:44.896592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:44.887275   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.888226   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.890805   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.892527   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.894299   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:44.896592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:44.896592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:44.940361   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:44.940361   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:44.989348   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:44.989348   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:47.553461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:47.576347   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:47.606540   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.606602   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:47.610221   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:47.637575   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.637634   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:47.640884   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:47.669743   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.669743   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:47.673137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:47.702380   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.702380   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:47.706154   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:47.732891   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.732891   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:47.736068   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:47.765439   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.765464   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:47.769425   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:47.799223   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.799223   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:47.799223   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:47.799223   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:47.845720   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:47.846247   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:47.903222   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:47.903222   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:47.932986   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:47.933995   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:48.016069   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:48.005024   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.005860   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.008285   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.009577   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.010646   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
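The describe-nodes failure is consistent with the empty container scan: nothing is serving on the apiserver port, so kubectl's dial to [::1]:8441 is refused outright instead of timing out. A quick hand check along these lines would confirm which it is (hypothetical commands; the port comes from the log, and /livez is the standard apiserver health endpoint):

    sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"
    curl -sk https://localhost:8441/livez || echo "apiserver unreachable"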
	I1216 05:04:48.016069   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:48.016069   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
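From here the same diagnostic pass repeats on a roughly three-second cadence: pgrep -xnf (exact match against the full command line, newest process wins) finds no kube-apiserver, every k8s_* probe comes back empty, and the log gathers run again. Structurally it is a wait loop of this shape (a sketch, assuming the ~3 s interval visible in the timestamps):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # between attempts: re-probe containers, re-gather logs
    done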
	I1216 05:04:50.561698   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:50.585162   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:50.615237   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.615237   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:50.618917   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:50.647113   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.647141   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:50.650625   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:50.677020   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.677020   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:50.680813   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:50.708471   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.708495   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:50.712156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:50.739340   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.739340   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:50.744296   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:50.773916   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.773916   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:50.778432   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:50.806364   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.806443   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:50.806443   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:50.806443   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:50.833814   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:50.833814   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:50.931229   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:50.917758   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.919179   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.923691   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.924605   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.925814   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:50.931285   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:50.931285   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:50.973466   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:50.973466   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:51.020564   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:51.020564   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:53.590321   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:53.613378   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:53.645084   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.645084   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:53.648887   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:53.675145   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.675145   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:53.678830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:53.704801   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.704801   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:53.708956   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:53.735945   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.736019   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:53.740579   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:53.766771   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.766771   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:53.771626   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:53.799949   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.799949   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:53.804011   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:53.831885   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.831885   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:53.831944   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:53.831944   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:53.878883   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:53.878883   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:53.941915   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:53.941915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:53.971778   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:53.971778   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:54.047386   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:54.036815   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038092   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038978   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.040350   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.041669   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:54.047386   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:54.047386   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:56.597206   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:56.623446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:56.654753   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.654783   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:56.657638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:56.687889   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.687889   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:56.691181   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:56.718606   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.718677   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:56.722343   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:56.748289   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.748289   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:56.752614   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:56.782030   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.782030   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:56.785674   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:56.813229   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.813229   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:56.817199   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:56.848354   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.848354   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:56.848354   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:56.848354   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:56.920172   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:56.920172   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:56.950025   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:56.950025   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:57.027703   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:57.017393   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.018120   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.020276   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.021295   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.022786   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:57.027703   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:57.027703   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:57.067904   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:57.067904   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:59.623468   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:59.644700   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:59.675762   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.675762   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:59.679255   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:59.710350   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.710350   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:59.714080   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:59.743398   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.743398   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:59.747303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:59.777836   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.777836   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:59.781321   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:59.806990   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.806990   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:59.811081   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:59.839112   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.839112   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:59.842923   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:59.870519   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.870519   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:59.870519   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:59.870519   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:59.931436   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:59.931436   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:59.961074   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:59.961074   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:00.046620   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:00.036147   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.037355   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.038578   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.039491   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.042183   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:00.046620   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:00.046620   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:00.087812   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:00.087812   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:02.639801   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:02.661744   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:02.693879   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.693879   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:02.697168   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:02.724574   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.724623   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:02.728234   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:02.756463   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.756463   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:02.760215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:02.785297   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.785297   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:02.789630   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:02.815967   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.815967   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:02.820071   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:02.846212   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.846212   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:02.849605   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:02.880460   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.880501   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:02.880501   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:02.880501   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:02.942651   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:02.942651   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:02.973117   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:02.973117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:03.055647   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:03.045630   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.046516   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.048690   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.049939   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.051104   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:03.055647   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:03.055647   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:03.097391   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:03.097391   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:05.655285   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:05.681408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:05.711017   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.711017   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:05.714391   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:05.744313   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.744382   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:05.748472   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:05.778641   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.778641   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:05.782574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:05.808201   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.808201   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:05.811215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:05.845094   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.845094   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:05.849400   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:05.889250   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.889250   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:05.892728   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:05.921657   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.921657   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:05.921657   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:05.921657   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:05.983252   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:05.983252   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:06.013531   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:06.013531   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:06.094324   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:06.085481   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.087264   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.088438   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.089540   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.090612   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:06.094324   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:06.094324   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:06.136404   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:06.136404   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:08.693146   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:08.716116   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:08.744861   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.744861   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:08.748618   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:08.778582   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.778582   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:08.782132   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:08.810955   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.810955   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:08.814794   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:08.844554   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.844554   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:08.848903   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:08.875472   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.875472   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:08.879360   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:08.907445   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.907445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:08.911290   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:08.937114   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.937114   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:08.937114   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:08.937114   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:08.999016   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:08.999016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:09.029260   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:09.029260   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:09.117123   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:09.107890   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.109150   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.110216   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.111522   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.112791   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:09.117123   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:09.117123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:09.158878   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:09.158878   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:11.716383   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:11.739574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:11.772194   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.772194   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:11.776083   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:11.808831   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.808831   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:11.814900   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:11.843123   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.843123   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:11.847084   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:11.877406   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.877406   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:11.883404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:11.909497   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.909497   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:11.915877   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:11.941644   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.941644   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:11.947889   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:11.975058   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.975058   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:11.975058   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:11.975058   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:12.037229   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:12.037229   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:12.066794   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:12.066794   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:12.145714   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:12.137677   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.138809   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.139798   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.141019   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.142446   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:12.145714   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:12.145752   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:12.189122   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:12.189122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:14.741253   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:14.764365   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:14.795995   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.795995   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:14.799654   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:14.827360   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.827360   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:14.830473   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:14.877262   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.877262   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:14.881028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:14.907013   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.907013   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:14.910966   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:14.940012   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.940012   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:14.943533   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:14.973219   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.973219   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:14.977027   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:15.005016   13524 logs.go:282] 0 containers: []
	W1216 05:05:15.005016   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:15.005016   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:15.005016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:15.068144   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:15.068144   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:15.097979   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:15.097979   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:15.178592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:15.170495   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.171184   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.173358   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.174428   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.175575   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:15.178592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:15.178592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:15.226390   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:15.226390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:17.780482   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:17.801597   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:17.829508   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.829533   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:17.833177   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:17.859642   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.859642   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:17.862985   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:17.890800   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.890800   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:17.893950   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:17.924358   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.924358   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:17.927717   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:17.953300   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.953300   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:17.957301   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:17.985802   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.985802   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:17.989495   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:18.016952   13524 logs.go:282] 0 containers: []
	W1216 05:05:18.016952   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:18.016952   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:18.016952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:18.106203   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:18.093536   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.094540   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.097011   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.098056   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.099323   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:18.106203   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:18.106203   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:18.149655   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:18.149655   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:18.195681   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:18.195707   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:18.257349   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:18.257349   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:20.791461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:20.812868   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:20.842707   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.842740   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:20.846536   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:20.875894   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.875894   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:20.879319   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:20.909010   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.909010   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:20.912866   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:20.941362   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.941362   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:20.945334   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:20.973226   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.973226   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:20.977453   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:21.004793   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.004793   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:21.008493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:21.034240   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.034240   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:21.034240   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:21.034240   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:21.098331   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:21.098331   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:21.129173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:21.129173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:21.218614   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:21.206034   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.207338   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.209505   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.211860   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.213420   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:21.218614   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:21.218614   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:21.261020   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:21.261020   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:23.818479   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:23.840022   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:23.873329   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.873385   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:23.877280   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:23.903358   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.903395   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:23.907325   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:23.934336   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.934336   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:23.938027   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:23.966398   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.966398   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:23.969989   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:23.996674   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.996674   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:24.000315   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:24.027001   13524 logs.go:282] 0 containers: []
	W1216 05:05:24.027001   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:24.030715   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:24.059648   13524 logs.go:282] 0 containers: []
	W1216 05:05:24.059648   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:24.059648   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:24.059648   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:24.120785   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:24.120785   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:24.155678   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:24.155678   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:24.234706   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:24.223173   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.224035   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.226157   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.227148   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.228146   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:24.234706   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:24.234706   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:24.278016   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:24.278016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:26.831237   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:26.852827   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:26.880996   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.880996   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:26.884822   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:26.912292   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.912292   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:26.916020   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:26.941600   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.941623   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:26.945391   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:26.972003   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.972068   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:26.975790   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:27.003933   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.003933   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:27.007292   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:27.033829   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.033861   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:27.037496   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:27.065486   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.065486   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:27.065486   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:27.065486   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:27.129425   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:27.129425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:27.158980   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:27.158980   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:27.240946   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:27.230164   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.231001   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.233339   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.234319   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.235558   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:27.240946   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:27.240946   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:27.282635   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:27.282635   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:29.835505   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:29.856873   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:29.887755   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.887755   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:29.891311   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:29.919341   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.919341   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:29.923153   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:29.949569   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.949569   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:29.953446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:29.982150   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.982217   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:29.985852   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:30.012079   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.012079   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:30.017875   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:30.044535   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.044597   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:30.048212   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:30.075190   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.075223   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:30.075223   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:30.075254   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:30.118411   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:30.118411   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:30.169092   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:30.169092   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:30.224666   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:30.224666   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:30.257052   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:30.257052   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:30.345423   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:30.334921   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.335618   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.339017   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.340268   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.341411   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
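Apart from timestamps and PIDs, each polling iteration above is identical. Condensed into a sketch (reconstructed from the log lines themselves, not from minikube's source), one pass looks roughly like this:

    # Look for a running apiserver, then for each expected control-plane container.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
        docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'   # 0 containers every time here
    done
    # Gather logs (the order varies between iterations):
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig                # fails: :8441 refused
    sudo journalctl -u docker -u cri-docker -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

Since no k8s_* container ever appears, every kubectl step fails the same way, and the cycle continues below.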
	I1216 05:05:32.850775   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:32.874038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:32.905193   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.905193   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:32.908688   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:32.935829   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.935829   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:32.939716   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:32.967717   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.967717   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:32.971291   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:32.997404   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.997452   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:33.001346   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:33.033845   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.033845   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:33.037379   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:33.065410   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.065410   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:33.070454   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:33.097202   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.097202   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:33.097202   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:33.097276   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:33.159607   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:33.159607   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:33.190136   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:33.190288   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:33.270012   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:33.258945   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.259847   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.262213   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.263220   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.265983   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:33.270012   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:33.270012   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:33.313088   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:33.313088   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:35.881230   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:35.903303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:35.933399   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.933399   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:35.936917   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:35.963670   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.963670   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:35.967376   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:35.993260   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.993260   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:35.999083   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:36.022547   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.022547   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:36.026765   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:36.058006   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.058006   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:36.061823   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:36.090079   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.090079   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:36.096186   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:36.124272   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.124272   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:36.124343   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:36.124343   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:36.187477   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:36.187477   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:36.217944   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:36.217944   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:36.308580   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:36.295229   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.296002   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.301995   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.302833   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.305048   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:36.308580   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:36.308580   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:36.350059   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:36.350059   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:38.904862   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:38.926217   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:38.956469   13524 logs.go:282] 0 containers: []
	W1216 05:05:38.956469   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:38.959962   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:38.986769   13524 logs.go:282] 0 containers: []
	W1216 05:05:38.986769   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:38.990008   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:39.018465   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.018465   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:39.021941   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:39.050244   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.050244   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:39.054097   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:39.080344   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.080344   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:39.084719   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:39.111908   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.111908   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:39.116234   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:39.145295   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.145295   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:39.145329   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:39.145329   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:39.190461   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:39.190461   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:39.250498   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:39.250498   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:39.281744   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:39.281744   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:39.360278   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:39.352154   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.353091   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.354283   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.355420   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.356645   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:39.360278   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:39.360278   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:41.907417   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:41.930781   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:41.959028   13524 logs.go:282] 0 containers: []
	W1216 05:05:41.959028   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:41.962118   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:41.992218   13524 logs.go:282] 0 containers: []
	W1216 05:05:41.992218   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:41.995638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:42.022706   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.022706   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:42.025963   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:42.058549   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.058591   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:42.063102   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:42.092433   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.092433   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:42.096210   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:42.124136   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.124136   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:42.127883   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:42.157397   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.157397   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:42.157397   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:42.157397   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:42.208439   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:42.208439   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:42.271217   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:42.271217   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:42.299862   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:42.300836   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:42.380228   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:42.370908   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.371801   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.372982   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.375094   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.376194   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:42.380228   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:42.380270   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:44.926983   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:44.949386   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:44.980885   13524 logs.go:282] 0 containers: []
	W1216 05:05:44.980885   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:44.984714   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:45.011775   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.011775   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:45.016515   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:45.044937   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.044937   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:45.048973   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:45.076493   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.076493   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:45.080322   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:45.107894   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.107894   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:45.111226   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:45.140033   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.140033   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:45.145613   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:45.173403   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.173403   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:45.173403   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:45.173403   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:45.234157   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:45.234157   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:45.263615   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:45.263615   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:45.340483   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	 output: 
	** stderr ** 
	E1216 05:05:45.331453   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.332466   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.333768   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.334753   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.335717   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:45.340483   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:45.340483   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:45.385573   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:45.385573   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:47.944179   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:47.965345   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:47.994755   13524 logs.go:282] 0 containers: []
	W1216 05:05:47.994755   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:47.997830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:48.025155   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.025155   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:48.028458   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:48.056617   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.056617   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:48.060320   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:48.089066   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.089066   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:48.092698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:48.121598   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.121628   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:48.125680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:48.157191   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.157191   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:48.160973   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:48.188668   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.188668   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:48.188668   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:48.188668   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:48.244524   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:48.244524   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:48.275889   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:48.275889   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:48.367425   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:48.355136   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.356146   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.358362   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.360588   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.361743   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:48.367425   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:48.367425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:48.406776   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:48.406776   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:50.963363   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:50.986681   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:51.017484   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.017484   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:51.021749   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:51.049184   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.049184   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:51.052784   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:51.083798   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.083798   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:51.087092   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:51.116150   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.116181   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:51.119540   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:51.148592   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.148592   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:51.152543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:51.182496   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.182496   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:51.186206   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:51.212397   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.212397   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:51.212397   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:51.212397   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:51.294464   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:51.283439   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.284417   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.286178   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.287320   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.289084   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:51.294464   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:51.294464   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:51.336829   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:51.336829   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:51.385258   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:51.385258   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:51.444652   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:51.444652   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
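	Each cycle above probes for one container per control-plane component by running docker ps -a with a k8s_ name filter and logs a warning when nothing matches. A sketch of that probe, assuming only that docker is on PATH; the component names are copied from the log, and the loop itself is illustrative, not minikube's implementation.

    // List the expected control-plane containers and report the missing ones.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil || strings.TrimSpace(string(out)) == "" {
                // Same outcome the log reports for every component above.
                fmt.Printf("no container found matching %q\n", c)
            }
        }
    }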
	I1216 05:05:53.980590   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:54.001769   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:54.030775   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.030775   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:54.034817   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:54.062359   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.062385   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:54.065740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:54.093857   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.093857   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:54.097137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:54.127972   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.127972   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:54.131415   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:54.158859   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.158859   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:54.162622   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:54.192077   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.192077   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:54.195448   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:54.223226   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.223226   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:54.223226   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:54.223226   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:54.267495   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:54.268494   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:54.318458   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:54.318458   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:54.379319   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:54.379319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:54.409390   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:54.409390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:54.497343   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:54.486388   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.487502   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.488610   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.489914   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.490890   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:57.001942   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:57.024505   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:57.051420   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.051420   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:57.055095   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:57.086650   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.086650   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:57.090451   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:57.116570   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.116570   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:57.119823   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:57.150064   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.150064   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:57.154328   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:57.180973   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.180973   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:57.185282   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:57.216597   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.216597   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:57.220216   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:57.246877   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.246877   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:57.246945   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:57.246945   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:57.308963   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:57.308963   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:57.340818   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:57.340818   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:57.440976   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:57.429668   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.430817   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.432070   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.433114   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.434207   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:57.440976   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:57.440976   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:57.485863   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:57.485863   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:00.038815   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:00.060757   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:00.089849   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.089849   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:00.093819   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:00.121426   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.121426   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:00.127493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:00.155063   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.155063   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:00.158469   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:00.186269   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.186269   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:00.191767   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:00.220680   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.220680   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:00.224397   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:00.251492   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.251492   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:00.255561   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:00.282084   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.282084   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:00.282084   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:00.282084   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:00.340687   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:00.340687   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:00.369302   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:00.369302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:00.450456   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:00.439681   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.441111   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.443533   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.444882   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.446042   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:00.450456   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:00.450456   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:00.494633   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:00.494633   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
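	The container-status command above is a shell fallback: it prefers crictl when `which crictl` resolves, and otherwise runs sudo docker ps -a. A sketch of the gather step under the assumption that the commands run locally via /bin/bash rather than over minikube's SSH runner; the command strings are copied from the log.

    // Run each gather command, warn on failure, and keep going,
    // mirroring the warn-and-continue behaviour visible above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("W gathering %s failed: %v\n", name, err)
            return
        }
        fmt.Printf("I gathered %s (%d bytes)\n", name, len(out))
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }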
	I1216 05:06:03.047228   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:03.070414   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:03.100869   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.100869   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:03.106543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:03.133873   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.133873   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:03.137304   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:03.169605   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.169605   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:03.173548   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:03.203086   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.203086   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:03.206980   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:03.233903   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.233903   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:03.239541   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:03.269916   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.269940   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:03.273671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:03.301055   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.301055   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:03.301055   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:03.301055   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:03.361314   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:03.361314   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:03.391207   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:03.391207   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:03.477457   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:03.467080   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.468297   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.470723   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.472023   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.473419   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:03.477457   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:03.477457   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:03.517504   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:03.517504   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:06.085750   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:06.108609   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:06.136944   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.136944   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:06.141119   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:06.168680   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.168680   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:06.172752   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:06.201039   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.201039   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:06.204417   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:06.234173   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.234173   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:06.237313   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:06.268910   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.268910   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:06.272680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:06.302995   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.303025   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:06.306434   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:06.343040   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.343040   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:06.343040   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:06.343040   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:06.404754   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:06.404754   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:06.438236   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:06.438236   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:06.533746   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:06.523818   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.524791   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.526159   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.527425   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.528623   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:06.533746   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:06.533746   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:06.587048   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:06.587048   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:09.143712   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:09.167180   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:09.197847   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.197847   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:09.201143   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:09.231047   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.231047   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:09.234772   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:09.263936   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.263936   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:09.267839   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:09.293408   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.293408   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:09.297079   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:09.325926   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.325926   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:09.329675   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:09.354839   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.354839   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:09.358679   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:09.386294   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.386294   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:09.386294   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:09.386294   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:09.446046   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:09.446046   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:09.474123   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:09.474123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:09.570430   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:09.552344   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.553464   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.562467   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.564909   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.565822   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:09.570430   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:09.570430   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:09.612996   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:09.612996   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
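	The timestamps show every cycle opening with the same pgrep probe roughly three seconds apart. A hedged sketch of such a polling loop; the three-second interval is read off the log, while the two-minute budget is purely an assumption.

    // Poll for a running kube-apiserver process until it appears or time runs out.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed budget, not from the log
        for time.Now().Before(deadline) {
            // The probe each cycle starts with above; pgrep exits non-zero on no match.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }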
	I1216 05:06:12.162991   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:12.185413   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:12.220706   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.220706   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:12.224471   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:12.252012   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.252085   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:12.255507   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:12.287146   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.287146   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:12.291350   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:12.322209   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.322209   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:12.326285   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:12.352463   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.352463   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:12.356344   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:12.384416   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.384445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:12.388099   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:12.416249   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.416249   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:12.416249   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:12.416249   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:12.457279   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:12.457279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:12.504035   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:12.504035   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:12.565073   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:12.565073   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:12.594834   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:12.594834   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:12.671197   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:12.662068   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.663058   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.664278   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.666376   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.667861   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:15.176441   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:15.198949   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:15.228375   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.228375   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:15.232284   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:15.260859   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.260859   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:15.264596   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:15.289482   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.289482   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:15.293332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:15.321841   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.321889   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:15.325366   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:15.355205   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.355205   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:15.359602   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:15.391155   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.391155   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:15.395288   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:15.422696   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.422696   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:15.422696   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:15.422696   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:15.509885   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:15.501731   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.502732   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.503898   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.505461   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.506268   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:06:15.509885   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:15.509885   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:15.550722   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:15.550722   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:15.597215   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:15.598218   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:15.655170   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:15.655170   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
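	When describe nodes exits non-zero, the gatherer records the command, its empty stdout, and its stderr as a single warning, then moves on to the next log source. A sketch of that step, assuming a kubectl on the local PATH instead of the pinned /var/lib/minikube/binaries/v1.35.0-beta.0 path used above.

    // Capture stdout and stderr separately so a failure can be reported
    // in the same warn-with-both-streams shape as the log entries above.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        var stdout, stderr bytes.Buffer
        cmd := exec.Command("kubectl", "describe", "nodes") // assumed local kubectl
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            fmt.Printf("W failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }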
	I1216 05:06:18.189600   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:18.214190   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:18.244833   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.244918   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:18.248323   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:18.274826   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.274826   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:18.278263   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:18.305755   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.305755   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:18.310038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:18.339762   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.339762   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:18.343253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:18.372235   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.372235   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:18.376253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:18.405785   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.405785   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:18.410335   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:18.436279   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.436279   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:18.436279   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:18.436279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:18.477830   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:18.477830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:18.533284   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:18.533302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:18.592952   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:18.592952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:18.623173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:18.623173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:18.706158   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
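The 05:06:18 cycle above then repeats essentially unchanged at 05:06:21, 05:06:24, 05:06:27 and 05:06:30: probe for an apiserver process, find zero control-plane containers, regather logs, and fail "describe nodes" with the same connection-refused errors. A minimal sketch of one iteration, built only from the Run: lines in the log (the roughly 3-second cadence is inferred from the timestamps):

    # probe for a running apiserver process inside the node
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # check whether any control-plane containers exist at all
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
    done

Every probe returns nothing, so the loop keeps cycling until the restart budget below runs out.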
	I1216 05:06:21.211431   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:21.233375   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:21.263996   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.263996   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:21.267857   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:21.296614   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.296614   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:21.300408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:21.327435   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.327435   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:21.331241   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:21.361684   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.361684   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:21.365531   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:21.393896   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.393896   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:21.397371   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:21.427885   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.427885   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:21.431500   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:21.459772   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.459772   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:21.459772   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:21.459772   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:21.522041   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:21.522041   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:21.550901   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:21.550901   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:21.638725   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:21.638725   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:21.638725   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:21.680001   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:21.680001   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:24.235731   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:24.258332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:24.285838   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.285838   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:24.289583   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:24.320077   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.320077   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:24.323958   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:24.351529   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.351529   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:24.355109   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:24.382170   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.382170   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:24.385526   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:24.415016   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.415016   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:24.418742   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:24.446275   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.446275   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:24.449841   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:24.475953   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.475953   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:24.475953   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:24.475953   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:24.537960   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:24.537960   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:24.566319   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:24.566319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:24.648912   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:24.648912   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:24.648912   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:24.689261   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:24.689261   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:27.244212   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:27.265843   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:27.291130   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.291130   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:27.295137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:27.321255   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.321255   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:27.324759   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:27.355906   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.355906   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:27.359611   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:27.386761   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.386761   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:27.390275   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:27.419553   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.419586   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:27.423093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:27.451634   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.451634   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:27.455077   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:27.485799   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.485799   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:27.485799   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:27.485799   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:27.547830   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:27.547830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:27.576915   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:27.576915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:27.661056   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:27.661056   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:27.661056   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:27.700831   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:27.700831   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:30.249035   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:30.271093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:30.299108   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.299188   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:30.302446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:30.332396   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.332482   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:30.338127   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:30.366185   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.366185   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:30.369711   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:30.400279   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.400279   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:30.404337   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:30.432897   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.432897   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:30.437025   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:30.465969   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.465969   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:30.470356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:30.499169   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.499169   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:30.499169   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:30.499169   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:30.557232   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:30.557232   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:30.584956   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:30.584956   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:30.671890   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:30.671890   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:30.671890   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:30.714351   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:30.714351   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:33.262234   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:33.280780   13524 kubeadm.go:602] duration metric: took 4m2.2739333s to restartPrimaryControlPlane
	W1216 05:06:33.280780   13524 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
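Having spent 4m02s failing to restart the existing control plane, minikube now falls back to a clean re-initialization: reset the node, delete any stale kubeconfigs, then run kubeadm init from scratch. Condensed from the Run: lines that follow (ordering only; the real invocation carries the full --ignore-preflight-errors list shown below):

    sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    sudo rm -f /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification   # abridged; full list in the Run: line below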
	I1216 05:06:33.285614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:06:33.738970   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:33.760826   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:33.774044   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:33.778124   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:33.790578   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:33.790578   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:33.794570   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:06:33.806138   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:33.810590   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:33.828749   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:06:33.841712   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:33.846141   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:33.862218   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.872779   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:33.877830   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.893064   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:06:33.905212   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:33.909089   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
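The four grep/rm pairs above all implement the same idiom: keep a kubeconfig under /etc/kubernetes only if it already points at the expected endpoint, otherwise delete it so kubeadm can regenerate it. Condensed into a sketch (path and port taken from the log; here every grep exits with status 2 because the files are already gone, so each rm -f is a no-op):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done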
	I1216 05:06:33.925766   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:34.031218   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:06:34.116656   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:06:34.211658   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:10:35.264797   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:10:35.264797   13524 kubeadm.go:319] 
	I1216 05:10:35.264797   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:10:35.269807   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:35.269807   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:35.269807   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:35.270949   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:35.271576   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:35.272413   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:35.272605   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:35.273278   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:35.273322   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:35.273414   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:35.273503   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:35.273681   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:35.273728   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:35.273769   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:35.273813   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:35.273855   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:35.273913   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:35.274584   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:35.274584   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:35.293047   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:35.293426   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:35.293599   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:35.293913   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:35.294149   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:35.294885   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:35.294982   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:35.295109   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:35.295195   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:35.295363   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:35.295447   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:35.295612   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:35.295735   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:35.295944   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:35.296070   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:35.299081   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:35.299081   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:35.300333   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000864945s
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	W1216 05:10:35.301920   13524 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000864945s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
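This first init attempt failed because the kubelet never answered its local health endpoint within the 4m0s wait-control-plane budget, so the static control-plane pods were never started; the retry that follows hits the identical wall. The probe kubeadm polls, plus its suggested follow-ups, can be replayed inside the node (e.g. via minikube ssh); the final cgroup check is an extra diagnostic prompted by the cgroups-v1 warning above, not something this log runs:

    # the exact health probe kubeadm polls for up to 4m0s
    curl -sSL http://127.0.0.1:10248/healthz
    # kubeadm's suggested follow-ups when the probe keeps failing
    systemctl status kubelet
    journalctl -xeu kubelet
    # which cgroup hierarchy is the node on? 'cgroup2fs' = v2, 'tmpfs' = v1
    stat -fc %T /sys/fs/cgroup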
	
	I1216 05:10:35.307024   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:10:35.771515   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:10:35.789507   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:10:35.793192   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:10:35.806790   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:10:35.806790   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:10:35.811076   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:10:35.824674   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:10:35.830540   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:10:35.849846   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:10:35.864835   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:10:35.868716   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:10:35.884647   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.897559   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:10:35.901847   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.919926   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:10:35.932321   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:10:35.937201   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:10:35.958683   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:10:36.010883   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:36.010883   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:36.157778   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:36.157778   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:36.157778   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:36.158306   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:36.158377   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:36.158462   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:36.158630   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:36.158749   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:36.158829   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:36.158950   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:36.159106   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:36.159725   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:36.159807   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:36.159927   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:36.160002   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:36.160137   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:36.160246   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:36.160629   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:36.161060   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:36.161172   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:36.263883   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:36.285337   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:36.291241   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:36.291368   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:36.291473   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:36.291610   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:36.292292   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:36.292479   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:36.355551   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:36.426990   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:36.485556   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:36.680670   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:36.834763   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:36.835291   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:36.840606   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:36.844374   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:36.844573   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:37.021660   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:37.022023   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:14:36.995901   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000744142s
	I1216 05:14:36.995988   13524 kubeadm.go:319] 
	I1216 05:14:36.996138   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:14:36.996214   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:14:36.996375   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:14:36.996375   13524 kubeadm.go:319] 
	I1216 05:14:36.996441   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 
	I1216 05:14:37.001376   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:14:37.002575   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:14:37.002650   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:14:37.002650   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:14:37.002650   13524 kubeadm.go:319] 
	I1216 05:14:37.003329   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:14:37.003329   13524 kubeadm.go:403] duration metric: took 12m6.0383556s to StartCluster
	I1216 05:14:37.003329   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:14:37.007935   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:14:37.064773   13524 cri.go:89] found id: ""
	I1216 05:14:37.064773   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.064773   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:14:37.064773   13524 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:14:37.069487   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:14:37.111914   13524 cri.go:89] found id: ""
	I1216 05:14:37.111914   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.111914   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:14:37.111914   13524 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:14:37.116663   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:14:37.152644   13524 cri.go:89] found id: ""
	I1216 05:14:37.152667   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.152667   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:14:37.152667   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:14:37.157010   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:14:37.200196   13524 cri.go:89] found id: ""
	I1216 05:14:37.200196   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.200196   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:14:37.200268   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:14:37.204321   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:14:37.243623   13524 cri.go:89] found id: ""
	I1216 05:14:37.243623   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.243623   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:14:37.243623   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:14:37.248366   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:14:37.289277   13524 cri.go:89] found id: ""
	I1216 05:14:37.289277   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.289277   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:14:37.289277   13524 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:14:37.294034   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:14:37.333593   13524 cri.go:89] found id: ""
	I1216 05:14:37.333593   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.333593   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:14:37.333593   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:14:37.333593   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:14:37.417323   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:14:37.417323   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:14:37.417323   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:14:37.457412   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:14:37.457412   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:14:37.504416   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:14:37.504416   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:14:37.564994   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:14:37.564994   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 05:14:37.597706   13524 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.597706   13524 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.600079   13524 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:14:37.606140   13524 out.go:203] 
	W1216 05:14:37.609999   13524 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:14:37.610044   13524 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 05:14:37.610044   13524 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 05:14:37.613011   13524 out.go:203] 
	
	
	==> Docker <==
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685355275Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685360576Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685379878Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685414282Z" level=info msg="Initializing buildkit"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.786375434Z" level=info msg="Completed buildkit initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.794992212Z" level=info msg="Daemon has completed initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151030Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151330Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795240140Z" level=info msg="API listen on [::]:2376"
	Dec 16 05:02:27 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:27 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:02:28 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Loaded network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 05:02:28 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:16:21.980741   43125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:21.981522   43125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:21.985182   43125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:21.988618   43125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:21.989671   43125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 05:02] CPU: 4 PID: 64252 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001010] RIP: 0033:0x7fd1009fcb20
	[  +0.000612] Code: Unable to access opcode bytes at RIP 0x7fd1009fcaf6.
	[  +0.000841] RSP: 002b:00007ffd77dbf430 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000856] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000836] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000800] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000980] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000812] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.813920] CPU: 4 PID: 64374 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f21a7b6eb20
	[  +0.000430] Code: Unable to access opcode bytes at RIP 0x7f21a7b6eaf6.
	[  +0.000678] RSP: 002b:00007ffd5fd33ec0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000807] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000827] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000787] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000791] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000788] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:16:22 up 52 min,  0 user,  load average: 0.47, 0.34, 0.42
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:16:18 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:19 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 456.
	Dec 16 05:16:19 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:19 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:19 functional-002200 kubelet[42967]: E1216 05:16:19.241954   42967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:19 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:19 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:19 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 457.
	Dec 16 05:16:19 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:19 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:19 functional-002200 kubelet[42978]: E1216 05:16:19.994213   42978 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:19 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:19 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:20 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 458.
	Dec 16 05:16:20 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:20 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:20 functional-002200 kubelet[42996]: E1216 05:16:20.773599   42996 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:20 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:20 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:21 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 16 05:16:21 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:21 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:21 functional-002200 kubelet[43018]: E1216 05:16:21.512122   43018 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:21 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:21 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (578.311ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.95s)
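The root cause of this failure is visible in the kubelet journal above: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("failed to validate kubelet configuration ... cgroup v1 support is unsupported"), matching the kubeadm SystemVerification warning that names the 'FailCgroupV1' kubelet configuration option. As a minimal sketch only, assuming one intends to keep the WSL2 node on cgroup v1 rather than migrating to cgroup v2, a KubeletConfiguration fragment opting back into cgroup v1 per that warning would look like:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Assumption for illustration: explicitly opt back into cgroup v1,
	# which kubelet v1.35+ otherwise rejects at startup (KEP-5573).
	failCgroupV1: false

Per the same warning, the SystemVerification preflight check would still have to be skipped explicitly. Note that minikube's own suggestion in the log ('--extra-config=kubelet.cgroup-driver=systemd', issue #4172) targets the cgroup driver setting instead and would not by itself clear the cgroup v1 validation failure.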

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (53.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-002200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-002200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (50.3559914s)

** stderr ** 
	E1216 05:16:14.506323    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:24.595702    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:34.641762    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:44.682430    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:54.722831    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-002200 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1216 05:16:14.506323    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:24.595702    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:34.641762    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:44.682430    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:54.722831    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1216 05:16:14.506323    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:24.595702    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:34.641762    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:44.682430    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:54.722831    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1216 05:16:14.506323    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:24.595702    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:34.641762    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:44.682430    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:54.722831    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1216 05:16:14.506323    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:24.595702    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:34.641762    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:44.682430    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:54.722831    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1216 05:16:14.506323    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:24.595702    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:34.641762    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:44.682430    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	E1216 05:16:54.722831    7624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:49316/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-002200
helpers_test.go:244: (dbg) docker inspect functional-002200:

-- stdout --
	[
	    {
	        "Id": "5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c",
	        "Created": "2025-12-16T04:45:40.409756394Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T04:45:40.683377123Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c/5ff1f572999609d86835a0e19cd1b8c96cf4b609adae211f5de1a6cab4332e6c-json.log",
	        "Name": "/functional-002200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-002200:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c9dc876b6d36cfb9187b77e09ee19265207778fe0f3a21a61a0bd9cb2c23ffd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-002200",
	                "Source": "/var/lib/docker/volumes/functional-002200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002200",
	                "name.minikube.sigs.k8s.io": "functional-002200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7062a9743a9f66ea1f6512860196024a39fe3306f73b3133c62afdfa09e50868",
	            "SandboxKey": "/var/run/docker/netns/7062a9743a9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49314"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49315"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49316"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d911cf84f8ad9d0db55a624c23a57a46f13bc633ca481c05a7647fa2f872dd9f",
	                    "EndpointID": "d3be8e1ce24e7fa95a2e9400d8196920dec1c4627e2f342897cb519647e4fadf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002200",
	                        "5ff1f5729996"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
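The inspect output above shows all five guest ports (22/tcp, 2376/tcp, 5000/tcp, 8441/tcp, 32443/tcp) published on 127.0.0.1 with dynamically assigned host ports (49314-49318). minikube recovers these mappings with a Go-template inspect query, the same pattern visible in the cli_runner lines later in this log; a minimal manual reproduction (profile name functional-002200 taken from this run):

    # Print the host port mapped to the guest SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-002200
    # Expected output for this run: 49317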
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-002200 -n functional-002200: exit status 2 (552.3426ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
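Exit status 2 alongside a Running host is consistent with how minikube status reports results: per its help text, the exit code encodes the VM, cluster, and Kubernetes statuses as separate bits, so a non-zero code does not necessarily mean the host probe failed, which is why the harness flags it as "may be ok". A manual re-check on the Windows host used in this run (PowerShell):

    # Re-run the probe and capture the exit code the harness saw
    out/minikube-windows-amd64.exe status --format='{{.Host}}' -p functional-002200
    echo $LASTEXITCODE    # 2 here: host bit clear, a cluster component bit set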
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs -n 25: (1.262482s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ license │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service list                                                                                                                            │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ service │ functional-002200 service list -o json                                                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service --namespace=default --https --url hello-node                                                                                    │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ service │ functional-002200 service hello-node --url --format={{.IP}}                                                                                               │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ service │ functional-002200 service hello-node --url                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image save kicbase/echo-server:functional-002200 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image rm kicbase/echo-server:functional-002200 --alsologtostderr                                                                        │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image ls                                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ image   │ functional-002200 image save --daemon kicbase/echo-server:functional-002200 --alsologtostderr                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh     │ functional-002200 ssh echo hello                                                                                                                          │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ ssh     │ functional-002200 ssh cat /etc/hostname                                                                                                                   │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ tunnel  │ functional-002200 tunnel --alsologtostderr                                                                                                                │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │                     │
	│ addons  │ functional-002200 addons list                                                                                                                             │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	│ addons  │ functional-002200 addons list -o json                                                                                                                     │ functional-002200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:16 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:02:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:02:22.143364   13524 out.go:360] Setting OutFile to fd 1016 ...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.184929   13524 out.go:374] Setting ErrFile to fd 816...
	I1216 05:02:22.184929   13524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:02:22.200191   13524 out.go:368] Setting JSON to false
	I1216 05:02:22.202193   13524 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2363,"bootTime":1765858978,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:02:22.202193   13524 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:02:22.207191   13524 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:02:22.209167   13524 notify.go:221] Checking for updates...
	I1216 05:02:22.213806   13524 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:02:22.217226   13524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:02:22.219465   13524 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:02:22.221726   13524 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:02:22.223984   13524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:02:22.226535   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:22.226535   13524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:02:22.342632   13524 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:02:22.345860   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.582056   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.565555373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.589056   13524 out.go:179] * Using the docker driver based on existing profile
	I1216 05:02:22.591055   13524 start.go:309] selected driver: docker
	I1216 05:02:22.591055   13524 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:22.592055   13524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:02:22.597056   13524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:02:22.818036   13524 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-16 05:02:22.800509482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:02:22.866190   13524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:02:22.866190   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:22.866190   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:22.866190   13524 start.go:353] cluster config:
	{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:22.870532   13524 out.go:179] * Starting "functional-002200" primary control-plane node in "functional-002200" cluster
	I1216 05:02:22.874014   13524 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 05:02:22.876014   13524 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 05:02:22.880521   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:22.880869   13524 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 05:02:22.880869   13524 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 05:02:22.880869   13524 cache.go:65] Caching tarball of preloaded images
	I1216 05:02:22.880869   13524 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 05:02:22.881393   13524 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 05:02:22.881584   13524 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\config.json ...
	I1216 05:02:22.957945   13524 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 05:02:22.957945   13524 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 05:02:22.957945   13524 cache.go:243] Successfully downloaded all kic artifacts
	I1216 05:02:22.957945   13524 start.go:360] acquireMachinesLock for functional-002200: {Name:mk1997a1c039b59133e065727b36e0e15b10eb3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:02:22.957945   13524 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-002200"
	I1216 05:02:22.957945   13524 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:02:22.957945   13524 fix.go:54] fixHost starting: 
	I1216 05:02:22.964754   13524 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
	I1216 05:02:23.020643   13524 fix.go:112] recreateIfNeeded on functional-002200: state=Running err=<nil>
	W1216 05:02:23.020643   13524 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:02:23.024655   13524 out.go:252] * Updating the running docker "functional-002200" container ...
	I1216 05:02:23.024655   13524 machine.go:94] provisionDockerMachine start ...
	I1216 05:02:23.028059   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.089226   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.089720   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.089720   13524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:02:23.263587   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 05:02:23.263587   13524 ubuntu.go:182] provisioning hostname "functional-002200"
	I1216 05:02:23.269095   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.343706   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.344098   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.344098   13524 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002200 && echo "functional-002200" | sudo tee /etc/hostname
	I1216 05:02:23.523871   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002200
	
	I1216 05:02:23.527605   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:23.582373   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:23.582799   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:23.582799   13524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:02:23.744731   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:02:23.744781   13524 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 05:02:23.744810   13524 ubuntu.go:190] setting up certificates
	I1216 05:02:23.744810   13524 provision.go:84] configureAuth start
	I1216 05:02:23.748413   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:23.805299   13524 provision.go:143] copyHostCerts
	I1216 05:02:23.805299   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 05:02:23.805299   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 05:02:23.805870   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 05:02:23.806787   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 05:02:23.806813   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 05:02:23.806957   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 05:02:23.807512   13524 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 05:02:23.807512   13524 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 05:02:23.807512   13524 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 05:02:23.808114   13524 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-002200 san=[127.0.0.1 192.168.49.2 functional-002200 localhost minikube]
	I1216 05:02:24.024499   13524 provision.go:177] copyRemoteCerts
	I1216 05:02:24.027499   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:02:24.030499   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.084455   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:24.207064   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 05:02:24.231047   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 05:02:24.253218   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:02:24.278696   13524 provision.go:87] duration metric: took 533.8823ms to configureAuth
	I1216 05:02:24.278696   13524 ubuntu.go:206] setting minikube options for container-runtime
	I1216 05:02:24.279294   13524 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:02:24.283136   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.338661   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.338661   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.338661   13524 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 05:02:24.501259   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 05:02:24.501259   13524 ubuntu.go:71] root file system type: overlay
	I1216 05:02:24.503332   13524 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 05:02:24.506757   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.561628   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.562204   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.562204   13524 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 05:02:24.732222   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 05:02:24.736823   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:24.789603   13524 main.go:143] libmachine: Using SSH client type: native
	I1216 05:02:24.790705   13524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 49317 <nil> <nil>}
	I1216 05:02:24.790705   13524 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 05:02:24.956843   13524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:02:24.956843   13524 machine.go:97] duration metric: took 1.9321739s to provisionDockerMachine
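The unit update at 05:02:24.790 is idempotent: diff -u exits 0 when the rendered docker.service.new matches the installed unit, so the branch after || (swap the file in, then daemon-reload, enable, and restart docker) only runs when the content actually changed; the empty command output above suggests no change was needed on this pass. The same pattern in isolation (bash, paths as in the log):

    # Install a rendered unit file and restart the service only on change
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }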
	I1216 05:02:24.956843   13524 start.go:293] postStartSetup for "functional-002200" (driver="docker")
	I1216 05:02:24.956843   13524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:02:24.961328   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:02:24.963780   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.018396   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.151694   13524 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:02:25.159738   13524 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 05:02:25.159738   13524 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 05:02:25.159738   13524 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 05:02:25.160372   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 05:02:25.161048   13524 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts -> hosts in /etc/test/nested/copy/11704
	I1216 05:02:25.165137   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11704
	I1216 05:02:25.176929   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 05:02:25.202240   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts --> /etc/test/nested/copy/11704/hosts (40 bytes)
	I1216 05:02:25.226560   13524 start.go:296] duration metric: took 269.6889ms for postStartSetup
	I1216 05:02:25.230465   13524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:02:25.232786   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.287361   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.409366   13524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 05:02:25.419299   13524 fix.go:56] duration metric: took 2.4613371s for fixHost
	I1216 05:02:25.419299   13524 start.go:83] releasing machines lock for "functional-002200", held for 2.4613371s
	I1216 05:02:25.423876   13524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002200
	I1216 05:02:25.479590   13524 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 05:02:25.483988   13524 ssh_runner.go:195] Run: cat /version.json
	I1216 05:02:25.483988   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.487582   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:25.542893   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	I1216 05:02:25.550987   13524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
	W1216 05:02:25.660611   13524 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 05:02:25.682804   13524 ssh_runner.go:195] Run: systemctl --version
	I1216 05:02:25.696301   13524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:02:25.703847   13524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:02:25.708899   13524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:02:25.720784   13524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:02:25.720820   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:25.720861   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:25.720884   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:25.746032   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 05:02:25.756672   13524 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 05:02:25.756737   13524 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
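The two warnings above trace back to the probe at 05:02:25.479: minikube invoked curl.exe inside the Linux guest, where the binary is named plain curl, so the check failed with exit 127 (command not found) rather than with a genuine network error; the proxy warning may therefore reflect the probe itself rather than actual registry connectivity. A manual re-check with the Linux binary name (same profile) would be:

    # Run the registry probe inside the guest with the Linux curl binary
    out/minikube-windows-amd64.exe -p functional-002200 ssh -- curl -sS -m 2 https://registry.k8s.io/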
	I1216 05:02:25.764577   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 05:02:25.778652   13524 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 05:02:25.782944   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 05:02:25.802561   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.822362   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 05:02:25.841368   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 05:02:25.860152   13524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:02:25.878804   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 05:02:25.897721   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 05:02:25.916509   13524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 05:02:25.935848   13524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:02:25.954408   13524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:02:25.972671   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.135013   13524 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 05:02:26.286857   13524 start.go:496] detecting cgroup driver to use...
	I1216 05:02:26.286857   13524 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 05:02:26.291710   13524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 05:02:26.313739   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.335410   13524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:02:26.394402   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:02:26.416456   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 05:02:26.433425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:02:26.458250   13524 ssh_runner.go:195] Run: which cri-dockerd
	I1216 05:02:26.469192   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 05:02:26.479991   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 05:02:26.508331   13524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 05:02:26.653923   13524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 05:02:26.807509   13524 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 05:02:26.808040   13524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 05:02:26.830421   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 05:02:26.853437   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:26.993507   13524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 05:02:27.802449   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:02:27.823963   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 05:02:27.846489   13524 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 05:02:27.872589   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:27.893632   13524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 05:02:28.032388   13524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 05:02:28.173426   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.303647   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 05:02:28.327061   13524 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 05:02:28.347849   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:28.515228   13524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 05:02:28.617223   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 05:02:28.634479   13524 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 05:02:28.638575   13524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 05:02:28.646251   13524 start.go:564] Will wait 60s for crictl version
	I1216 05:02:28.650257   13524 ssh_runner.go:195] Run: which crictl
	I1216 05:02:28.663129   13524 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 05:02:28.707678   13524 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 05:02:28.711140   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.754899   13524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 05:02:28.798065   13524 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 05:02:28.801328   13524 cli_runner.go:164] Run: docker exec -t functional-002200 dig +short host.docker.internal
	I1216 05:02:28.928679   13524 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 05:02:28.933317   13524 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
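
The dig against host.docker.internal recovers the Docker Desktop host address (192.168.65.254 here) from inside the node, and the grep checks whether /etc/hosts already maps host.minikube.internal to it. When the entry is missing, the conventional fix, run inside the node, is a one-line append (sketch):

    # Inside the minikube node (the log drives this over ssh_runner):
    HOST_IP=$(dig +short host.docker.internal)
    grep -q "host.minikube.internal" /etc/hosts \
      || echo "${HOST_IP} host.minikube.internal" | sudo tee -a /etc/hosts
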
	I1216 05:02:28.945787   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:29.006099   13524 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1216 05:02:29.009213   13524 kubeadm.go:884] updating cluster {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:02:29.009213   13524 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 05:02:29.012544   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.044964   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.045018   13524 docker.go:621] Images already preloaded, skipping extraction
	I1216 05:02:29.050176   13524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 05:02:29.078871   13524 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-002200
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1216 05:02:29.078871   13524 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:02:29.078871   13524 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1216 05:02:29.078871   13524 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-002200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
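
The [Unit]/[Service]/[Install] fragment above is what minikube renders into the kubelet systemd drop-in (the 10-kubeadm.conf scp'd a few lines below); the doubled ExecStart= is the standard systemd idiom for clearing an inherited ExecStart before setting a new one. After editing a drop-in like this, the merged unit can be inspected and reloaded (sketch):

    # Show the effective unit, including drop-ins in /etc/systemd/system/kubelet.service.d/
    sudo systemctl cat kubelet
    # Pick up the new ExecStart and restart the service:
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
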
	I1216 05:02:29.083733   13524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 05:02:29.153386   13524 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1216 05:02:29.153441   13524 cni.go:84] Creating CNI manager for ""
	I1216 05:02:29.153441   13524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 05:02:29.153441   13524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:02:29.153497   13524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002200 NodeName:functional-002200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:02:29.153740   13524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-002200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
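
The rendered kubeadm config above carries four documents: InitConfiguration (node registration and the cri-dockerd socket), ClusterConfiguration (including the NamespaceAutoProvision admission override that triggers the drift-reconfigure later in this log), KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a file offline before it is applied; a sketch, assuming the `kubeadm config validate` subcommand is available in this build:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
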
	
	I1216 05:02:29.159735   13524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 05:02:29.170652   13524 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:02:29.175184   13524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:02:29.187845   13524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 05:02:29.208540   13524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 05:02:29.226431   13524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1216 05:02:29.250294   13524 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 05:02:29.261010   13524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:02:29.404128   13524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:02:30.007557   13524 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200 for IP: 192.168.49.2
	I1216 05:02:30.007557   13524 certs.go:195] generating shared ca certs ...
	I1216 05:02:30.007557   13524 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 05:02:30.008172   13524 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 05:02:30.008887   13524 certs.go:257] generating profile certs ...
	I1216 05:02:30.013750   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\client.key
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key.31248742
	I1216 05:02:30.014359   13524 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key
	I1216 05:02:30.014952   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 05:02:30.015510   13524 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 05:02:30.015510   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 05:02:30.016106   13524 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 05:02:30.017231   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:02:30.047196   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 05:02:30.070848   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:02:30.096702   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:02:30.121970   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 05:02:30.146884   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:02:30.173170   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:02:30.199629   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-002200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:02:30.226778   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 05:02:30.250105   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:02:30.272968   13524 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 05:02:30.298291   13524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:02:30.318635   13524 ssh_runner.go:195] Run: openssl version
	I1216 05:02:30.332668   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.355358   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 05:02:30.372181   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.379909   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.384371   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 05:02:30.432373   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:02:30.447662   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.464870   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:02:30.481196   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.489322   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.492995   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:02:30.540388   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:02:30.558567   13524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.574821   13524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 05:02:30.592525   13524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.598815   13524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.603416   13524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 05:02:30.650141   13524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
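
The three openssl/ln rounds above implement OpenSSL's hashed certificate directory: lookups in /etc/ssl/certs go through symlinks named <subject-hash>.0, so each CA gets both a readable copy and a hash-named link (b5213941.0 for minikubeCA in this run). The pattern for a single certificate (sketch):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    # Subject-name hash that OpenSSL will look up, e.g. b5213941:
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
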
	I1216 05:02:30.666001   13524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:02:30.677986   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:02:30.724950   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:02:30.775114   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:02:30.821700   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:02:30.868594   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:02:30.916597   13524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
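
The six `-checkend 86400` probes ask whether each control-plane certificate will still be valid 24 hours from now: openssl exits 0 if so, 1 if the certificate expires inside the window, which is what lets minikube decide whether regeneration is needed. Standalone form (sketch):

    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400; then
        echo "${c}: valid for at least another 24h"
      else
        echo "${c}: expires within 24h, regeneration needed"
      fi
    done
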
	I1216 05:02:30.959171   13524 kubeadm.go:401] StartCluster: {Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:02:30.963942   13524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:30.994317   13524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:02:31.005043   13524 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:02:31.005043   13524 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:02:31.009827   13524 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:02:31.023534   13524 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.026842   13524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
	I1216 05:02:31.080676   13524 kubeconfig.go:125] found "functional-002200" server: "https://127.0.0.1:49316"
	I1216 05:02:31.087667   13524 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:02:31.101385   13524 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 04:45:52.574738576 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 05:02:29.239240136 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
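
Drift detection is a plain `diff -u` of the on-disk kubeadm.yaml against the freshly rendered .new file: a non-zero exit from diff (here caused by the enable-admission-plugins override) is what sends the flow down the stop-kubelet/reconfigure path below. The same check by hand (sketch):

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config drift detected; control plane will be reconfigured"
    fi
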
	I1216 05:02:31.101385   13524 kubeadm.go:1161] stopping kube-system containers ...
	I1216 05:02:31.105991   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 05:02:31.137859   13524 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 05:02:31.162569   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:02:31.173570   13524 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 04:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 16 04:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 16 04:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 16 04:49 /etc/kubernetes/scheduler.conf
	
	I1216 05:02:31.178070   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:02:31.193447   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:02:31.204464   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.208708   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:02:31.223814   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.236112   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.240050   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:02:31.256323   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:02:31.270390   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:02:31.274655   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 05:02:31.291834   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:02:31.309287   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.373785   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.743926   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:31.973968   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:02:32.044614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
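
Rather than a full `kubeadm init`, the restart path replays individual init phases against the updated config, which regenerates certs, kubeconfigs, the kubelet bootstrap, static pod manifests, and local etcd without re-bootstrapping the cluster. The five invocations above, collected into one loop (commands verbatim from the log; KVER is a convenience variable introduced here):

    KVER=v1.35.0-beta.0
    # Word-splitting on $phase is intentional: each entry is "<phase> [subphase]".
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/${KVER}:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
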
	I1216 05:02:32.128503   13524 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:02:32.133080   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:32.634591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:33.135532   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:33.633951   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:34.133670   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:34.636362   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:35.133362   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:35.634567   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:36.133378   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:36.634652   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:37.133364   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:37.635212   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:38.133996   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:38.634136   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:39.133538   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:39.634806   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:40.133591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:40.633797   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:41.133611   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:41.634039   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:42.133614   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:42.634568   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:43.134027   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:43.634254   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:44.133984   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:44.634389   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:45.133761   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:45.634255   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:46.134409   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:46.634402   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:47.133336   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:47.634728   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:48.133723   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:48.634056   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:49.133313   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:49.634057   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:50.134418   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:50.633737   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:51.133246   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:51.634053   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:52.134086   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:52.633592   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:53.134909   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:53.633883   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:54.133900   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:54.633980   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:55.133861   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:55.634905   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:56.133623   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:56.633940   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:57.133423   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:57.635127   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:58.133876   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:58.634340   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:59.133894   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:02:59.633621   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:00.136295   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:00.633723   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:01.133850   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:01.630633   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:02.135818   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:02.635548   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:03.134173   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:03.634568   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:04.133911   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:04.634440   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:05.133383   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:05.633913   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:06.133618   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:06.635004   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:07.133967   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:07.634270   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:08.133741   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:08.633647   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:09.134149   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:09.634014   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:10.133536   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:10.633733   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:11.134705   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:11.634320   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:12.134680   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:12.634430   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:13.134597   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:13.634710   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:14.134733   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:14.634512   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:15.134218   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:15.633594   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:16.134090   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:16.634446   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:17.134183   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:17.634400   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:18.134566   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:18.633972   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:19.134271   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:19.634238   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:20.134883   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:20.634468   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:21.134017   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:21.634112   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:22.135187   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:22.634480   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:23.134672   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:23.633614   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:24.134339   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:24.634245   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:25.135181   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:25.634475   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:26.134348   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:26.634151   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:27.133880   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:27.633366   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:28.133826   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:28.634409   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:29.133350   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:29.633502   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:30.134183   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:30.633644   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:31.133961   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:31.634081   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
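
Everything from 05:02:32 to this point is the apiserver wait loop: one pgrep roughly every 500ms for a kube-apiserver process that, in this failed run, never appears. The equivalent bounded poll (sketch; the cadence is inferred from the timestamps and the 60s bound is an assumption):

    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "kube-apiserver process never appeared" >&2
        exit 1
      fi
      sleep 0.5
    done
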
	I1216 05:03:32.132156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:32.161948   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.161948   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:32.165532   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:32.190451   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.190451   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:32.194000   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:32.221132   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.221201   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:32.224735   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:32.251199   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.251265   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:32.254803   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:32.285399   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.285399   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:32.288927   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:32.316407   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.316407   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:32.320399   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:32.348258   13524 logs.go:282] 0 containers: []
	W1216 05:03:32.348330   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:32.348330   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:32.348330   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:32.391508   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:32.391508   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:32.457156   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:32.457156   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:32.517211   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:32.517211   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:32.547816   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:32.547816   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:32.628349   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:32.619255   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.620222   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.621844   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.623691   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:32.625391   23024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
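
The describe-nodes attempts all die with connection refused on localhost:8441 for the same underlying reason the `docker ps` filters above came back empty: no kube-apiserver container was ever created, so nothing is listening on the forwarded port. A quick manual triage of that symptom from inside the node (sketch):

    # Anything bound to the apiserver port?
    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
    # Any apiserver container, running or exited?
    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
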
	I1216 05:03:35.133793   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:35.155411   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:35.187090   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.187090   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:35.190727   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:35.222945   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.223013   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:35.226777   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:35.253910   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.253910   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:35.257543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:35.284715   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.284715   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:35.288228   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:35.317179   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.317179   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:35.320898   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:35.347702   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.347702   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:35.351146   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:35.380831   13524 logs.go:282] 0 containers: []
	W1216 05:03:35.380865   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:35.380865   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:35.380894   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:35.460624   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:35.451664   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.452907   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.454000   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455064   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:35.455938   23152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:35.460624   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:35.460624   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:35.503284   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:35.503284   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:35.556840   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:35.556840   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:35.619567   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:35.619567   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.155257   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:38.180004   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:38.207932   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.207932   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:38.211988   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:38.240313   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.240313   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:38.243787   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:38.271584   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.271584   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:38.275398   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:38.302890   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.302890   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:38.308028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:38.334217   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.334217   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:38.338421   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:38.366179   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.366179   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:38.370864   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:38.399763   13524 logs.go:282] 0 containers: []
	W1216 05:03:38.399763   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:38.399763   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:38.399763   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:38.427010   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:38.427010   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:38.520678   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:38.510609   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.512076   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.514160   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.516365   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:38.517345   23307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:38.520678   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:38.520678   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:38.565076   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:38.565076   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:03:38.618166   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:38.618166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
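Each cycle above probes for the control-plane containers by name. The check can be reproduced by hand with the same Docker filter minikube uses (a minimal sketch, taken verbatim from the commands in this log; run inside the node, e.g. via `minikube ssh`):

	# Look for a kube-apiserver container the same way logs.go does;
	# an empty result corresponds to the "0 containers: []" lines above.
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}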
	I1216 05:03:41.184770   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:03:41.209166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:03:41.236776   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.236853   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:03:41.240392   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:03:41.270413   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.270413   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:03:41.274447   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:03:41.299898   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.299898   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:03:41.303698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:03:41.331395   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.331395   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:03:41.335559   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:03:41.360930   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.360930   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:03:41.364502   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:03:41.391119   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.391119   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:03:41.394804   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:03:41.421862   13524 logs.go:282] 0 containers: []
	W1216 05:03:41.421862   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:03:41.421862   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:03:41.421862   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:03:41.485064   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:03:41.485064   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:03:41.515166   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:03:41.515166   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:03:41.602242   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:03:41.591320   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.592209   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.596556   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.598648   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:03:41.599664   23459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:03:41.602283   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:03:41.602283   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:03:41.643359   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:03:41.643359   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
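The recurring "describe nodes" failure can likewise be re-run by hand (a sketch, assuming the binary and kubeconfig paths shown in the log):

	# While no kube-apiserver container is running, this exits with
	# status 1 and "connection refused" on localhost:8441, matching the
	# stderr blocks captured above.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig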
	[... the same log-collection cycle repeats with identical results at 05:03:44, 05:03:47, 05:03:50, 05:03:53, 05:03:56, 05:03:59, 05:04:02, 05:04:05, and 05:04:08: no k8s_* containers found, and "kubectl describe nodes" refused on localhost:8441 ...]
	I1216 05:04:11.371323   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:11.393671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:11.423912   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.423912   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:11.426874   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:11.457321   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.457321   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:11.460999   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:11.491719   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.491742   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:11.495112   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:11.524188   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.524188   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:11.530312   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:11.558213   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.558213   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:11.562148   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:11.587695   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.587695   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:11.591166   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:11.618568   13524 logs.go:282] 0 containers: []
	W1216 05:04:11.618568   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:11.618568   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:11.618568   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:11.700342   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:11.691304   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.692191   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.695157   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.697226   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.698318   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:11.691304   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.692191   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.695157   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.697226   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:11.698318   24957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:11.700342   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:11.700342   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:11.741856   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:11.741856   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:11.788648   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:11.788648   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:11.849193   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:11.849193   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
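For context on the block above: it is one full pass of minikube's apiserver wait loop. Each pass polls Docker for the control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and, finding none, gathers kubelet, dmesg, describe-nodes, Docker, and container-status diagnostics before retrying. The following is a minimal Go sketch of that pattern only, not minikube's actual source; the helper name containerID, the 3-second interval, and the 6-minute budget are illustrative assumptions inferred from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerID is a hypothetical helper: it shells out exactly like the
// "docker ps -a --filter=name=... --format={{.ID}}" lines in the log and
// returns the matching container ID, or "" when docker prints nothing.
func containerID(filter string) string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+filter, "--format", "{{.ID}}").Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // assumed overall wait budget
	for time.Now().Before(deadline) {
		if id := containerID("k8s_kube-apiserver"); id != "" {
			fmt.Println("kube-apiserver container found:", id)
			return
		}
		// Nothing is running yet, so collect the same diagnostics the log
		// shows (kubelet and Docker journals here; dmesg, describe nodes,
		// and container status are analogous), then poll again.
		_ = exec.Command("journalctl", "-u", "kubelet", "-n", "400").Run()
		_ = exec.Command("journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400").Run()
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}

Under those assumptions, the repeated "connection refused" errors on localhost:8441 simply mean every pass of this loop failed before its deadline, which is consistent with the eventual test failure.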
	I1216 05:04:14.383220   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:14.404569   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:14.434777   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.434777   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:14.438799   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:14.466806   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.466806   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:14.470274   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:14.496413   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.496413   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:14.500050   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:14.531727   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.531727   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:14.535294   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:14.563393   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.563393   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:14.567315   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:14.592541   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.592541   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:14.596104   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:14.628287   13524 logs.go:282] 0 containers: []
	W1216 05:04:14.628287   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:14.628287   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:14.628287   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:14.692122   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:14.692122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:14.720935   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:14.720935   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:14.809952   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:14.799452   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.800203   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.804585   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.805531   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.807780   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:14.799452   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.800203   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.804585   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.805531   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:14.807780   25117 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:14.809952   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:14.809952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:14.853842   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:14.853842   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:17.408509   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:17.431899   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:17.459863   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.459863   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:17.463546   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:17.489686   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.489686   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:17.493208   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:17.521484   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.521484   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:17.525013   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:17.552847   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.552847   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:17.556723   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:17.583677   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.583677   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:17.587267   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:17.613916   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.613916   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:17.617383   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:17.649827   13524 logs.go:282] 0 containers: []
	W1216 05:04:17.649827   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:17.649827   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:17.649827   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:17.697170   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:17.697170   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:17.754919   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:17.754919   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:17.784122   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:17.784122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:17.864432   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:17.854159   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.855168   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.856276   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.857030   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.859249   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:17.854159   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.855168   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.856276   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.857030   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:17.859249   25297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:17.864463   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:17.864463   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:20.414214   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:20.438174   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:20.468253   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.468253   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:20.471621   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:20.500056   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.500056   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:20.503669   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:20.535901   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.535901   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:20.539210   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:20.566366   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.566366   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:20.570012   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:20.599351   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.599351   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:20.603383   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:20.629474   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.629474   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:20.636460   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:20.662795   13524 logs.go:282] 0 containers: []
	W1216 05:04:20.662795   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:20.662795   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:20.662795   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:20.723615   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:20.723615   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:20.752636   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:20.752636   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:20.837861   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:20.826007   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.827210   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.829865   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.831983   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.832937   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:20.826007   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.827210   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.829865   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.831983   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:20.832937   25431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:20.837861   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:20.837861   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:20.879492   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:20.879492   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:23.436591   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:23.459603   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:23.484610   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.485910   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:23.489800   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:23.516517   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.516517   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:23.520034   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:23.549815   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.549815   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:23.553056   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:23.583026   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.583026   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:23.586920   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:23.615403   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.615403   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:23.618776   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:23.647271   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.647271   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:23.650983   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:23.677461   13524 logs.go:282] 0 containers: []
	W1216 05:04:23.677520   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:23.677520   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:23.677559   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:23.743913   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:23.743913   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:23.773462   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:23.773462   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:23.862441   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:23.853159   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.854280   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.855338   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.856368   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.857459   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:23.853159   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.854280   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.855338   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.856368   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:23.857459   25587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:23.862502   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:23.862526   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:23.903963   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:23.903963   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:26.456802   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:26.479694   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:26.507859   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.507859   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:26.511781   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:26.537683   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.537683   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:26.541445   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:26.569611   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.569611   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:26.573478   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:26.604349   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.604377   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:26.609300   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:26.638784   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.638784   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:26.641986   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:26.669720   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.669720   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:26.673932   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:26.700387   13524 logs.go:282] 0 containers: []
	W1216 05:04:26.700387   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:26.700387   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:26.700387   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:26.766000   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:26.766000   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:26.796095   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:26.796095   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:26.882695   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:26.871861   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.872835   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.874610   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.876128   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.877304   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:26.871861   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.872835   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.874610   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.876128   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:26.877304   25744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:26.882695   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:26.882695   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:26.924768   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:26.924768   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:29.478546   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:29.499904   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:29.527110   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.527110   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:29.531186   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:29.558221   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.558221   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:29.561810   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:29.591838   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.591838   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:29.596165   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:29.623642   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.623642   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:29.627192   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:29.652493   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.652526   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:29.655375   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:29.682914   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.682957   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:29.686351   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:29.714123   13524 logs.go:282] 0 containers: []
	W1216 05:04:29.714123   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:29.714123   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:29.714123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:29.774899   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:29.774899   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:29.802342   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:29.802342   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:29.885111   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:29.875757   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.876923   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.877963   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.879017   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.880228   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:29.875757   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.876923   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.877963   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.879017   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:29.880228   25901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:29.885242   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:29.885242   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:29.926184   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:29.926184   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:32.480583   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:32.502826   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:32.533439   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.533463   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:32.537047   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:32.564845   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.564845   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:32.568203   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:32.595465   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.595526   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:32.598404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:32.626657   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.626657   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:32.630597   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:32.656354   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.656354   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:32.660989   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:32.690899   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.690920   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:32.693919   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:32.721353   13524 logs.go:282] 0 containers: []
	W1216 05:04:32.721353   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:32.721353   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:32.721353   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:32.783967   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:32.783967   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:32.813914   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:32.813914   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:32.893277   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:32.884279   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.884964   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.887572   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.888527   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.889778   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:32.884279   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.884964   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.887572   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.888527   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:32.889778   26051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:32.893277   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:32.893277   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:32.936887   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:32.936887   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:35.508248   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:35.532690   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:35.562568   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.562568   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:35.566845   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:35.593817   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.593817   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:35.597629   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:35.626272   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.626272   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:35.629313   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:35.660523   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.660523   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:35.664731   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:35.696512   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.696512   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:35.699886   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:35.730008   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.730008   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:35.733873   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:35.759351   13524 logs.go:282] 0 containers: []
	W1216 05:04:35.759351   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:35.760366   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:35.760366   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:35.805169   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:35.805169   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:35.871943   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:35.871943   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:35.902094   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:35.902094   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:35.984144   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:35.975517   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.976548   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.977611   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.978767   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.980051   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:04:35.975517   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.976548   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.977611   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.978767   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:35.980051   26222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:04:35.984671   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:35.984671   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:38.532401   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:38.553975   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:38.587094   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.587163   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:38.590542   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:38.615078   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.615078   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:38.620176   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:38.646601   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.646601   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:38.649820   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:38.678850   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.678850   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:38.681929   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:38.708321   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.708380   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:38.711681   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:38.740769   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.740859   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:38.744600   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:38.773706   13524 logs.go:282] 0 containers: []
	W1216 05:04:38.773706   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:38.773706   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:38.773706   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:38.802001   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:38.802997   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:38.884848   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:38.877013   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.878352   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.879473   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.880593   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:38.881944   26357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:38.884848   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:38.884848   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:38.927525   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:38.927525   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:38.973952   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:38.973952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
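The "failed describe nodes" block above (and the identical ones that follow) reduces to a single symptom: nothing is listening on the apiserver port, so every kubectl call dies with dial tcp [::1]:8441: connect: connection refused. A minimal Go sketch of that reachability check (illustrative only, not minikube's actual code; the address and timeout are assumptions read off the log):

    // probe.go: minimal reachability check for the apiserver endpoint.
    // Hypothetical helper, not minikube code; address and timeout assumed.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func probeAPIServer(addr string) error {
        // DialTimeout fails fast with "connect: connection refused"
        // when no process is bound to the port, as in the log above.
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    func main() {
        if err := probeAPIServer("localhost:8441"); err != nil {
            fmt.Println("apiserver unreachable:", err)
        }
    }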
	I1216 05:04:41.541093   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:41.564290   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:41.592889   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.592889   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:41.597074   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:41.626087   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.626087   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:41.630076   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:41.656581   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.656581   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:41.660739   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:41.689073   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.689073   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:41.692998   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:41.718767   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.718767   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:41.722605   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:41.750884   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.750884   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:41.754652   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:41.780815   13524 logs.go:282] 0 containers: []
	W1216 05:04:41.780815   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:41.780815   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:41.780815   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:41.872864   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:41.862126   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.863102   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.867559   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.868025   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:41.869518   26501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:41.872864   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:41.872864   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:41.911229   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:41.911229   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:41.958721   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:41.958721   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:42.017563   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:42.017563   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:44.553294   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:44.576740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:44.607009   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.607009   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:44.610623   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:44.635971   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.635971   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:44.639338   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:44.664675   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.664675   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:44.667916   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:44.696295   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.696329   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:44.700356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:44.727661   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.727661   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:44.731273   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:44.759144   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.759174   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:44.762982   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:44.790033   13524 logs.go:282] 0 containers: []
	W1216 05:04:44.790033   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:44.790080   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:44.790080   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:44.817221   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:44.817221   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:44.896592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:44.887275   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.888226   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.890805   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.892527   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:44.894299   26657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:44.896592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:44.896592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:44.940361   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:44.940361   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:44.989348   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:44.989348   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
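Each polling cycle above issues the same docker ps -a --filter=name=k8s_<component> --format={{.ID}} query once per control-plane component and warns when zero IDs come back. A rough Go equivalent of that loop (a sketch, not minikube's implementation; the component names and docker flags are taken from the commands in the log, everything else is assumed):

    // containers.go: list k8s_<component> containers the way the log does.
    // Sketch only; component list and docker flags come from the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c,
                "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                // Mirrors the repeated warning in the log:
                // No container was found matching "<component>"
                fmt.Printf("no container was found matching %q\n", c)
            }
        }
    }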
	I1216 05:04:47.553461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:47.576347   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:47.606540   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.606602   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:47.610221   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:47.637575   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.637634   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:47.640884   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:47.669743   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.669743   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:47.673137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:47.702380   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.702380   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:47.706154   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:47.732891   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.732891   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:47.736068   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:47.765439   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.765464   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:47.769425   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:47.799223   13524 logs.go:282] 0 containers: []
	W1216 05:04:47.799223   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:47.799223   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:47.799223   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:47.845720   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:47.846247   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:47.903222   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:47.903222   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:47.932986   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:47.933995   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:48.016069   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:48.005024   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.005860   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.008285   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.009577   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:48.010646   26825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:48.016069   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:48.016069   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:50.561698   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:50.585162   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:50.615237   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.615237   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:50.618917   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:50.647113   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.647141   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:50.650625   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:50.677020   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.677020   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:50.680813   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:50.708471   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.708495   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:50.712156   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:50.739340   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.739340   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:50.744296   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:50.773916   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.773916   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:50.778432   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:50.806364   13524 logs.go:282] 0 containers: []
	W1216 05:04:50.806443   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:50.806443   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:50.806443   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:50.833814   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:50.833814   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:50.931229   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:50.917758   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.919179   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.923691   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.924605   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:50.925814   26961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:50.931285   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:50.931285   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:50.973466   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:50.973466   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:51.020564   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:51.020564   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
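The "container status" step relies on a small shell fallback: which crictl || echo crictl substitutes the crictl path when it is installed, and the trailing || sudo docker ps -a falls back to the Docker CLI if crictl is missing or exits non-zero. Invoked from Go in the same general way (a sketch under those assumptions; only the command string is copied from the log):

    // status.go: run the crictl-or-docker fallback seen in the log.
    // Sketch; assumes bash and sudo are available as on the minikube node.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Command string from the log: prefer crictl, fall back to docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }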
	I1216 05:04:53.590321   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:53.613378   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:53.645084   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.645084   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:53.648887   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:53.675145   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.675145   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:53.678830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:53.704801   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.704801   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:53.708956   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:53.735945   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.736019   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:53.740579   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:53.766771   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.766771   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:53.771626   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:53.799949   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.799949   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:53.804011   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:53.831885   13524 logs.go:282] 0 containers: []
	W1216 05:04:53.831885   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:53.831944   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:53.831944   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:53.878883   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:53.878883   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:53.941915   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:53.941915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:53.971778   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:53.971778   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:54.047386   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:54.036815   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038092   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.038978   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.040350   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:54.041669   27125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:54.047386   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:54.047386   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:56.597206   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:56.623446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:56.654753   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.654783   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:56.657638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:56.687889   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.687889   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:56.691181   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:56.718606   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.718677   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:56.722343   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:56.748289   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.748289   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:56.752614   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:56.782030   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.782030   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:56.785674   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:56.813229   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.813229   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:56.817199   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:56.848354   13524 logs.go:282] 0 containers: []
	W1216 05:04:56.848354   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:56.848354   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:56.848354   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:56.920172   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:56.920172   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:56.950025   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:56.950025   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:04:57.027703   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:04:57.017393   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.018120   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.020276   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.021295   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:04:57.022786   27266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:04:57.027703   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:04:57.027703   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:04:57.067904   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:04:57.067904   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:04:59.623468   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:04:59.644700   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:04:59.675762   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.675762   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:04:59.679255   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:04:59.710350   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.710350   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:04:59.714080   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:04:59.743398   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.743398   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:04:59.747303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:04:59.777836   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.777836   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:04:59.781321   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:04:59.806990   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.806990   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:04:59.811081   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:04:59.839112   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.839112   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:04:59.842923   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:04:59.870519   13524 logs.go:282] 0 containers: []
	W1216 05:04:59.870519   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:04:59.870519   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:04:59.870519   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:04:59.931436   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:04:59.931436   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:04:59.961074   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:04:59.961074   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:00.046620   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:00.036147   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.037355   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.038578   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.039491   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:00.042183   27417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:00.046620   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:00.046620   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:00.087812   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:00.087812   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:02.639801   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:02.661744   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:02.693879   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.693879   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:02.697168   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:02.724574   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.724623   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:02.728234   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:02.756463   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.756463   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:02.760215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:02.785297   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.785297   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:02.789630   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:02.815967   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.815967   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:02.820071   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:02.846212   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.846212   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:02.849605   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:02.880460   13524 logs.go:282] 0 containers: []
	W1216 05:05:02.880501   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:02.880501   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:02.880501   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:02.942651   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:02.942651   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:02.973117   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:02.973117   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:03.055647   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:03.045630   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.046516   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.048690   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.049939   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:03.051104   27570 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:03.055647   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:03.055647   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:03.097391   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:03.097391   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:05.655285   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:05.681408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:05.711017   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.711017   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:05.714391   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:05.744313   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.744382   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:05.748472   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:05.778641   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.778641   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:05.782574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:05.808201   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.808201   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:05.811215   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:05.845094   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.845094   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:05.849400   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:05.889250   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.889250   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:05.892728   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:05.921657   13524 logs.go:282] 0 containers: []
	W1216 05:05:05.921657   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:05.921657   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:05.921657   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:05.983252   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:05.983252   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:06.013531   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:06.013531   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:06.094324   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:06.085481   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.087264   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.088438   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.089540   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:06.090612   27720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1216 05:05:06.094324   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:06.094324   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:06.136404   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:06.136404   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:08.693146   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:08.716116   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:08.744861   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.744861   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:08.748618   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:08.778582   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.778582   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:08.782132   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:08.810955   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.810955   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:08.814794   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:08.844554   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.844554   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:08.848903   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:08.875472   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.875472   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:08.879360   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:08.907445   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.907445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:08.911290   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:08.937114   13524 logs.go:282] 0 containers: []
	W1216 05:05:08.937114   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:08.937114   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:08.937114   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:08.999016   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:08.999016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:09.029260   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:09.029260   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:09.117123   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:09.107890   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.109150   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.110216   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.111522   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.112791   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:09.107890   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.109150   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.110216   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.111522   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:09.112791   27873 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
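The block above is one pass of minikube's apiserver health probe: pgrep for a running kube-apiserver process, then a "docker ps" name filter for each expected control-plane container, all of which come back empty here. As a minimal sketch assembled from the exact commands logged above (the profile name does not appear in this stretch of the log, so <profile> is a placeholder), the same probe can be replayed by hand:

    # Replay minikube's container probe inside the node (sketch; <profile> is a placeholder).
    minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # Same filter minikube logs above; an empty result matches the "0 containers" lines.
      minikube -p <profile> ssh -- docker ps -a --filter=name=k8s_${name} --format='{{.ID}}'
    done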
	I1216 05:05:09.117123   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:09.117123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:09.158878   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:09.158878   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:11.716383   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:11.739574   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:11.772194   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.772194   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:11.776083   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:11.808831   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.808831   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:11.814900   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:11.843123   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.843123   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:11.847084   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:11.877406   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.877406   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:11.883404   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:11.909497   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.909497   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:11.915877   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:11.941644   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.941644   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:11.947889   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:11.975058   13524 logs.go:282] 0 containers: []
	W1216 05:05:11.975058   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:11.975058   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:11.975058   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:12.037229   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:12.037229   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:12.066794   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:12.066794   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:12.145714   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:12.137677   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.138809   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.139798   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.141019   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.142446   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:12.137677   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.138809   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.139798   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.141019   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:12.142446   28024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:12.145714   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:12.145752   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:12.189122   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:12.189122   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:14.741253   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:14.764365   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:14.795995   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.795995   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:14.799654   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:14.827360   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.827360   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:14.830473   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:14.877262   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.877262   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:14.881028   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:14.907013   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.907013   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:14.910966   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:14.940012   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.940012   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:14.943533   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:14.973219   13524 logs.go:282] 0 containers: []
	W1216 05:05:14.973219   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:14.977027   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:15.005016   13524 logs.go:282] 0 containers: []
	W1216 05:05:15.005016   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:15.005016   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:15.005016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:15.068144   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:15.068144   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:15.097979   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:15.097979   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:15.178592   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:15.170495   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.171184   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.173358   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.174428   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.175575   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:15.170495   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.171184   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.173358   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.174428   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:15.175575   28174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:15.178592   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:15.178592   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:15.226390   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:15.226390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:17.780482   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:17.801597   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:17.829508   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.829533   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:17.833177   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:17.859642   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.859642   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:17.862985   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:17.890800   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.890800   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:17.893950   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:17.924358   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.924358   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:17.927717   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:17.953300   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.953300   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:17.957301   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:17.985802   13524 logs.go:282] 0 containers: []
	W1216 05:05:17.985802   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:17.989495   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:18.016952   13524 logs.go:282] 0 containers: []
	W1216 05:05:18.016952   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:18.016952   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:18.016952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:18.106203   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:18.093536   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.094540   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.097011   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.098056   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.099323   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:18.093536   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.094540   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.097011   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.098056   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:18.099323   28324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:18.106203   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:18.106203   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:18.149655   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:18.149655   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:18.195681   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:18.195707   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:18.257349   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:18.257349   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:20.791461   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:20.812868   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:20.842707   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.842740   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:20.846536   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:20.875894   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.875894   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:20.879319   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:20.909010   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.909010   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:20.912866   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:20.941362   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.941362   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:20.945334   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:20.973226   13524 logs.go:282] 0 containers: []
	W1216 05:05:20.973226   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:20.977453   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:21.004793   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.004793   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:21.008493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:21.034240   13524 logs.go:282] 0 containers: []
	W1216 05:05:21.034240   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:21.034240   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:21.034240   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:21.098331   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:21.098331   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:21.129173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:21.129173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:21.218614   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:21.206034   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.207338   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.209505   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.211860   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.213420   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:21.206034   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.207338   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.209505   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.211860   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:21.213420   28481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:21.218614   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:21.218614   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:21.261020   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:21.261020   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:23.818479   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:23.840022   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:23.873329   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.873385   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:23.877280   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:23.903358   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.903395   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:23.907325   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:23.934336   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.934336   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:23.938027   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:23.966398   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.966398   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:23.969989   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:23.996674   13524 logs.go:282] 0 containers: []
	W1216 05:05:23.996674   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:24.000315   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:24.027001   13524 logs.go:282] 0 containers: []
	W1216 05:05:24.027001   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:24.030715   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:24.059648   13524 logs.go:282] 0 containers: []
	W1216 05:05:24.059648   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:24.059648   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:24.059648   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:24.120785   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:24.120785   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:24.155678   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:24.155678   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:24.234706   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:24.223173   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.224035   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.226157   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.227148   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.228146   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:24.223173   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.224035   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.226157   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.227148   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:24.228146   28633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:24.234706   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:24.234706   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:24.278016   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:24.278016   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:26.831237   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:26.852827   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:26.880996   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.880996   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:26.884822   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:26.912292   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.912292   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:26.916020   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:26.941600   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.941623   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:26.945391   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:26.972003   13524 logs.go:282] 0 containers: []
	W1216 05:05:26.972068   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:26.975790   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:27.003933   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.003933   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:27.007292   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:27.033829   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.033861   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:27.037496   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:27.065486   13524 logs.go:282] 0 containers: []
	W1216 05:05:27.065486   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:27.065486   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:27.065486   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:27.129425   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:27.129425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:27.158980   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:27.158980   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:27.240946   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:27.230164   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.231001   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.233339   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.234319   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.235558   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:27.230164   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.231001   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.233339   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.234319   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:27.235558   28782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:27.240946   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:27.240946   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:27.282635   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:27.282635   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:29.835505   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:29.856873   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:29.887755   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.887755   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:29.891311   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:29.919341   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.919341   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:29.923153   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:29.949569   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.949569   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:29.953446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:29.982150   13524 logs.go:282] 0 containers: []
	W1216 05:05:29.982217   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:29.985852   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:30.012079   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.012079   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:30.017875   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:30.044535   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.044597   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:30.048212   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:30.075190   13524 logs.go:282] 0 containers: []
	W1216 05:05:30.075223   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:30.075223   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:30.075254   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:30.118411   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:30.118411   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:30.169092   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:30.169092   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:30.224666   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:30.224666   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:30.257052   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:30.257052   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:30.345423   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:30.334921   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.335618   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.339017   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.340268   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.341411   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:30.334921   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.335618   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.339017   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.340268   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:30.341411   28945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:32.850775   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:32.874038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:32.905193   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.905193   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:32.908688   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:32.935829   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.935829   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:32.939716   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:32.967717   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.967717   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:32.971291   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:32.997404   13524 logs.go:282] 0 containers: []
	W1216 05:05:32.997452   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:33.001346   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:33.033845   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.033845   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:33.037379   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:33.065410   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.065410   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:33.070454   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:33.097202   13524 logs.go:282] 0 containers: []
	W1216 05:05:33.097202   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:33.097202   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:33.097276   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:33.159607   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:33.159607   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:33.190136   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:33.190288   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:33.270012   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:33.258945   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.259847   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.262213   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.263220   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.265983   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:33.258945   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.259847   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.262213   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.263220   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:33.265983   29081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:33.270012   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:33.270012   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:33.313088   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:33.313088   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:35.881230   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:35.903303   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:35.933399   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.933399   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:35.936917   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:35.963670   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.963670   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:35.967376   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:35.993260   13524 logs.go:282] 0 containers: []
	W1216 05:05:35.993260   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:35.999083   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:36.022547   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.022547   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:36.026765   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:36.058006   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.058006   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:36.061823   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:36.090079   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.090079   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:36.096186   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:36.124272   13524 logs.go:282] 0 containers: []
	W1216 05:05:36.124272   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:36.124343   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:36.124343   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:36.187477   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:36.187477   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:36.217944   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:36.217944   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:36.308580   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:05:36.295229   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.296002   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.301995   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.302833   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.305048   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:05:36.295229   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.296002   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.301995   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.302833   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:36.305048   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:36.308580   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:36.308580   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:36.350059   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:36.350059   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:38.904862   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:38.926217   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:38.956469   13524 logs.go:282] 0 containers: []
	W1216 05:05:38.956469   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:38.959962   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:38.986769   13524 logs.go:282] 0 containers: []
	W1216 05:05:38.986769   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:38.990008   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:39.018465   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.018465   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:39.021941   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:39.050244   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.050244   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:39.054097   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:39.080344   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.080344   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:39.084719   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:39.111908   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.111908   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:39.116234   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:39.145295   13524 logs.go:282] 0 containers: []
	W1216 05:05:39.145295   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:39.145329   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:39.145329   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:39.190461   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:39.190461   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:39.250498   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:39.250498   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:39.281744   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:39.281744   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:39.360278   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:05:39.352154   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.353091   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.354283   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.355420   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:39.356645   29407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
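	The block above is one iteration of minikube's health-poll loop: every few seconds it looks for a kube-apiserver process, lists each expected control-plane container (all absent), re-gathers kubelet, dmesg, and Docker logs, and retries `kubectl describe nodes`, which fails because nothing is listening on apiserver port 8441. A minimal sketch of the same checks run by hand, mirroring the Run: lines above (`<profile>` is a placeholder for the profile under test; the /livez probe is an assumed extra step, not part of minikube's loop):
	# Is an apiserver process running inside the node?
	minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Is there a kube-apiserver container, even an exited one?
	minikube -p <profile> ssh -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
	# The kubectl errors are plain TCP refusals on the apiserver port:
	minikube -p <profile> ssh -- curl -sk https://localhost:8441/livez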
	I1216 05:05:39.360278   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:39.360278   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:41.907417   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:41.930781   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:41.959028   13524 logs.go:282] 0 containers: []
	W1216 05:05:41.959028   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:41.962118   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:41.992218   13524 logs.go:282] 0 containers: []
	W1216 05:05:41.992218   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:41.995638   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:42.022706   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.022706   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:42.025963   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:42.058549   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.058591   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:42.063102   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:42.092433   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.092433   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:42.096210   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:42.124136   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.124136   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:42.127883   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:42.157397   13524 logs.go:282] 0 containers: []
	W1216 05:05:42.157397   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:42.157397   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:42.157397   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:42.208439   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:42.208439   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:42.271217   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:42.271217   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:42.299862   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:42.300836   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:42.380228   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:05:42.370908   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.371801   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.372982   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.375094   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:42.376194   29558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:42.380228   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:42.380270   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:44.926983   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:44.949386   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:44.980885   13524 logs.go:282] 0 containers: []
	W1216 05:05:44.980885   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:44.984714   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:45.011775   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.011775   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:45.016515   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:45.044937   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.044937   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:45.048973   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:45.076493   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.076493   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:45.080322   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:45.107894   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.107894   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:45.111226   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:45.140033   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.140033   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:45.145613   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:45.173403   13524 logs.go:282] 0 containers: []
	W1216 05:05:45.173403   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:45.173403   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:45.173403   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:45.234157   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:45.234157   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:45.263615   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:45.263615   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:45.340483   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:05:45.331453   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.332466   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.333768   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.334753   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:45.335717   29701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:45.340483   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:45.340483   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:45.385573   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:45.385573   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:47.944179   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:47.965345   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:47.994755   13524 logs.go:282] 0 containers: []
	W1216 05:05:47.994755   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:47.997830   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:48.025155   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.025155   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:48.028458   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:48.056617   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.056617   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:48.060320   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:48.089066   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.089066   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:48.092698   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:48.121598   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.121628   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:48.125680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:48.157191   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.157191   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:48.160973   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:48.188668   13524 logs.go:282] 0 containers: []
	W1216 05:05:48.188668   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:48.188668   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:48.188668   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:48.244524   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:48.244524   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:48.275889   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:48.275889   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:48.367425   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:05:48.355136   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.356146   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.358362   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.360588   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:48.361743   29849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:48.367425   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:48.367425   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:48.406776   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:48.406776   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
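	The container-status command above is a two-level shell fallback: the substitution `which crictl || echo crictl` yields crictl's path when the binary is on PATH (otherwise just the bare name, whose invocation then fails), and the outer `|| sudo docker ps -a` falls back to the Docker CLI whenever the crictl attempt errors. The same pattern in isolation, as a standalone sketch:
	# Prefer crictl when installed; otherwise list containers via docker.
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a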
	I1216 05:05:50.963363   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:50.986681   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:51.017484   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.017484   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:51.021749   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:51.049184   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.049184   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:51.052784   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:51.083798   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.083798   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:51.087092   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:51.116150   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.116181   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:51.119540   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:51.148592   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.148592   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:51.152543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:51.182496   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.182496   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:51.186206   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:51.212397   13524 logs.go:282] 0 containers: []
	W1216 05:05:51.212397   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:51.212397   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:51.212397   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:51.294464   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:05:51.283439   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.284417   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.286178   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.287320   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:51.289084   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:51.294464   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:51.294464   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:51.336829   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:51.336829   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:51.385258   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:51.385258   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:51.444652   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:51.444652   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:53.980590   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:54.001769   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:54.030775   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.030775   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:54.034817   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:54.062359   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.062385   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:54.065740   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:54.093857   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.093857   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:54.097137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:54.127972   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.127972   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:54.131415   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:54.158859   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.158859   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:54.162622   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:54.192077   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.192077   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:54.195448   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:54.223226   13524 logs.go:282] 0 containers: []
	W1216 05:05:54.223226   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:54.223226   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:54.223226   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:54.267495   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:54.268494   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:05:54.318458   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:54.318458   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:54.379319   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:54.379319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:54.409390   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:54.409390   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:54.497343   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:05:54.486388   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.487502   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.488610   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.489914   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:54.490890   30163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:57.001942   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:05:57.024505   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:05:57.051420   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.051420   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:05:57.055095   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:05:57.086650   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.086650   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:05:57.090451   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:05:57.116570   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.116570   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:05:57.119823   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:05:57.150064   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.150064   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:05:57.154328   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:05:57.180973   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.180973   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:05:57.185282   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:05:57.216597   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.216597   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:05:57.220216   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:05:57.246877   13524 logs.go:282] 0 containers: []
	W1216 05:05:57.246877   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:05:57.246945   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:05:57.246945   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:05:57.308963   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:05:57.308963   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:05:57.340818   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:05:57.340818   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:05:57.440976   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:05:57.429668   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.430817   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.432070   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.433114   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:05:57.434207   30297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:05:57.440976   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:05:57.440976   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:05:57.485863   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:05:57.485863   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:00.038815   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:00.060757   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:00.089849   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.089849   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:00.093819   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:00.121426   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.121426   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:00.127493   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:00.155063   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.155063   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:00.158469   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:00.186269   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.186269   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:00.191767   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:00.220680   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.220680   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:00.224397   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:00.251492   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.251492   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:00.255561   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:00.282084   13524 logs.go:282] 0 containers: []
	W1216 05:06:00.282084   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:00.282084   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:00.282084   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:00.340687   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:00.340687   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:00.369302   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:00.369302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:00.450456   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:06:00.439681   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.441111   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.443533   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.444882   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:00.446042   30450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:00.450456   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:00.450456   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:00.494633   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:00.494633   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:03.047228   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:03.070414   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:03.100869   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.100869   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:03.106543   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:03.133873   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.133873   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:03.137304   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:03.169605   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.169605   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:03.173548   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:03.203086   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.203086   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:03.206980   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:03.233903   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.233903   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:03.239541   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:03.269916   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.269940   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:03.273671   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:03.301055   13524 logs.go:282] 0 containers: []
	W1216 05:06:03.301055   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:03.301055   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:03.301055   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:03.361314   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:03.361314   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:03.391207   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:03.391207   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:03.477457   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:06:03.467080   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.468297   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.470723   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.472023   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:03.473419   30603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:03.477457   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:03.477457   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:03.517504   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:03.517504   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:06.085750   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:06.108609   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:06.136944   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.136944   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:06.141119   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:06.168680   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.168680   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:06.172752   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:06.201039   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.201039   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:06.204417   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:06.234173   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.234173   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:06.237313   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:06.268910   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.268910   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:06.272680   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:06.302995   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.303025   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:06.306434   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:06.343040   13524 logs.go:282] 0 containers: []
	W1216 05:06:06.343040   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:06.343040   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:06.343040   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:06.404754   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:06.404754   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:06.438236   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:06.438236   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:06.533746   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	** stderr ** 
	E1216 05:06:06.523818   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.524791   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.526159   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.527425   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:06.528623   30754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:06.533746   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:06.533746   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:06.587048   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:06.587048   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:09.143712   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:09.167180   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:09.197847   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.197847   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:09.201143   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:09.231047   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.231047   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:09.234772   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:09.263936   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.263936   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:09.267839   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:09.293408   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.293408   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:09.297079   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:09.325926   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.325926   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:09.329675   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:09.354839   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.354839   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:09.358679   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:09.386294   13524 logs.go:282] 0 containers: []
	W1216 05:06:09.386294   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:09.386294   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:09.386294   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:09.446046   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:09.446046   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:09.474123   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:09.474123   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:09.570430   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:09.552344   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.553464   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.562467   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.564909   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.565822   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:09.552344   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.553464   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.562467   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.564909   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:09.565822   30909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:09.570430   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:09.570430   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:09.612996   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:09.612996   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:12.162991   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:12.185413   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:12.220706   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.220706   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:12.224471   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:12.252012   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.252085   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:12.255507   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:12.287146   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.287146   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:12.291350   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:12.322209   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.322209   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:12.326285   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:12.352463   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.352463   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:12.356344   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:12.384416   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.384445   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:12.388099   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:12.416249   13524 logs.go:282] 0 containers: []
	W1216 05:06:12.416249   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:12.416249   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:12.416249   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:12.457279   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:12.457279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:12.504035   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:12.504035   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:12.565073   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:12.565073   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:12.594834   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:12.594834   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:12.671197   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:12.662068   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.663058   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.664278   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.666376   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.667861   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:12.662068   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.663058   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.664278   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.666376   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:12.667861   31087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:15.176441   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:15.198949   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:15.228375   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.228375   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:15.232284   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:15.260859   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.260859   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:15.264596   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:15.289482   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.289482   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:15.293332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:15.321841   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.321889   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:15.325366   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:15.355205   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.355205   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:15.359602   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:15.391155   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.391155   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:15.395288   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:15.422696   13524 logs.go:282] 0 containers: []
	W1216 05:06:15.422696   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:15.422696   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:15.422696   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:15.509885   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:15.501731   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.502732   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.503898   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.505461   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.506268   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:15.501731   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.502732   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.503898   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.505461   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:15.506268   31206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:15.509885   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:15.509885   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:15.550722   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:15.550722   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:15.597215   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:15.598218   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:15.655170   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:15.655170   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:18.189600   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:18.214190   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:18.244833   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.244918   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:18.248323   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:18.274826   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.274826   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:18.278263   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:18.305755   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.305755   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:18.310038   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:18.339762   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.339762   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:18.343253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:18.372235   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.372235   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:18.376253   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:18.405785   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.405785   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:18.410335   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:18.436279   13524 logs.go:282] 0 containers: []
	W1216 05:06:18.436279   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:18.436279   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:18.436279   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:18.477830   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:18.477830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:18.533284   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:18.533302   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:18.592952   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:18.592952   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:18.623173   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:18.623173   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:18.706158   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:18.695935   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.696872   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.699160   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.700510   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:18.701521   31391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:21.211431   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:21.233375   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:21.263996   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.263996   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:21.267857   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:21.296614   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.296614   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:21.300408   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:21.327435   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.327435   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:21.331241   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:21.361684   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.361684   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:21.365531   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:21.393896   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.393896   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:21.397371   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:21.427885   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.427885   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:21.431500   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:21.459772   13524 logs.go:282] 0 containers: []
	W1216 05:06:21.459772   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:21.459772   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:21.459772   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:21.522041   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:21.522041   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:21.550901   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:21.550901   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:21.638725   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:21.627307   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.628343   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.629090   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.631635   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:21.632400   31526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:21.638725   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:21.638725   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:21.680001   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:21.680001   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:24.235731   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:24.258332   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:24.285838   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.285838   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:24.289583   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:24.320077   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.320077   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:24.323958   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:24.351529   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.351529   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:24.355109   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:24.382170   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.382170   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:24.385526   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:24.415016   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.415016   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:24.418742   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:24.446275   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.446275   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:24.449841   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:24.475953   13524 logs.go:282] 0 containers: []
	W1216 05:06:24.475953   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:24.475953   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:24.475953   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:24.537960   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:24.537960   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:24.566319   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:24.566319   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:24.648912   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:24.639127   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.640216   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.641694   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.642989   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:24.643980   31677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:24.648912   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:24.648912   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:24.689261   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:24.689261   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:27.244212   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:27.265843   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:27.291130   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.291130   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:27.295137   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:27.321255   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.321255   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:27.324759   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:27.355906   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.355906   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:27.359611   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:27.386761   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.386761   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:27.390275   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:27.419553   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.419586   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:27.423093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:27.451634   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.451634   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:27.455077   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:27.485799   13524 logs.go:282] 0 containers: []
	W1216 05:06:27.485799   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:27.485799   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:27.485799   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:27.547830   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:27.547830   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:27.576915   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:27.576915   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:27.661056   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:27.651493   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.652444   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.653497   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.654928   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:27.656287   31830 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:27.661056   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:27.661056   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:27.700831   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:27.700831   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:30.249035   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:30.271093   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 05:06:30.299108   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.299188   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:06:30.302446   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 05:06:30.332396   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.332482   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:06:30.338127   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 05:06:30.366185   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.366185   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:06:30.369711   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 05:06:30.400279   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.400279   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:06:30.404337   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 05:06:30.432897   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.432897   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:06:30.437025   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 05:06:30.465969   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.465969   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:06:30.470356   13524 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 05:06:30.499169   13524 logs.go:282] 0 containers: []
	W1216 05:06:30.499169   13524 logs.go:284] No container was found matching "kindnet"
	I1216 05:06:30.499169   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:06:30.499169   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:06:30.557232   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:06:30.557232   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:06:30.584956   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:06:30.584956   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:06:30.671890   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:06:30.661473   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.662403   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.665095   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.667106   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:06:30.668354   31982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:06:30.671890   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:06:30.671890   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:06:30.714351   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:06:30.714351   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:06:33.262234   13524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:06:33.280780   13524 kubeadm.go:602] duration metric: took 4m2.2739333s to restartPrimaryControlPlane
	W1216 05:06:33.280780   13524 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 05:06:33.285614   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:06:33.738970   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:06:33.760826   13524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:06:33.774044   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:06:33.778124   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:06:33.790578   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:06:33.790578   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:06:33.794570   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:06:33.806138   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:06:33.810590   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:06:33.828749   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:06:33.841712   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:06:33.846141   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:06:33.862218   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.872779   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:06:33.877830   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:06:33.893064   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:06:33.905212   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:06:33.909089   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
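The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: for each control-plane kubeconfig it checks whether the file already points at the expected endpoint and removes it otherwise. In this run every file is simply absent, so each grep exits with status 2 and each rm is a no-op. A condensed sketch of the same loop, assuming the endpoint used in this run:

    ENDPOINT=https://control-plane.minikube.internal:8441
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done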
	I1216 05:06:33.925766   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:06:34.031218   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:06:34.116656   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:06:34.211658   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:10:35.264797   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:10:35.264797   13524 kubeadm.go:319] 
	I1216 05:10:35.264797   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:10:35.269807   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:35.269807   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:35.269807   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:35.269807   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:35.270949   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:35.271052   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:35.271576   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:35.271702   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:35.272413   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:35.272605   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:35.272632   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:35.273278   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:35.273322   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:35.273414   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:35.273503   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:35.273681   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:35.273728   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:35.273769   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:35.273813   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:35.273855   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:35.273913   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:35.274000   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:35.274584   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:35.274584   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:35.293047   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:35.293426   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:35.293599   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:35.293913   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:35.294149   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:35.294277   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:35.294885   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:35.294982   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:35.295109   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:35.295195   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:35.295363   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:35.295447   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:35.295612   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:35.295735   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:35.295944   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:35.296070   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:35.299081   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:35.299081   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:35.299715   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:35.300333   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:35.300333   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:10:35.300908   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000864945s
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	I1216 05:10:35.300908   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:10:35.300908   13524 kubeadm.go:319] 
	W1216 05:10:35.301920   13524 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000864945s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 05:10:35.307024   13524 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 05:10:35.771515   13524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:10:35.789507   13524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 05:10:35.793192   13524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:10:35.806790   13524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:10:35.806790   13524 kubeadm.go:158] found existing configuration files:
	
	I1216 05:10:35.811076   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1216 05:10:35.824674   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:10:35.830540   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:10:35.849846   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1216 05:10:35.864835   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:10:35.868716   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:10:35.884647   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.897559   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:10:35.901847   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:10:35.919926   13524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1216 05:10:35.932321   13524 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:10:35.937201   13524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
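	The four grep-and-remove pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A minimal bash sketch of the same check, using the endpoint and file names shown in this run (illustrative only, not minikube's actual code):

	# Keep each kubeconfig only if it points at the expected endpoint;
	# otherwise remove it so the next 'kubeadm init' regenerates it.
	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done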
	I1216 05:10:35.958683   13524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 05:10:36.010883   13524 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 05:10:36.010883   13524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 05:10:36.157778   13524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 05:10:36.157778   13524 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 05:10:36.157778   13524 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 05:10:36.158306   13524 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 05:10:36.158377   13524 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 05:10:36.158462   13524 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 05:10:36.158630   13524 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 05:10:36.158749   13524 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 05:10:36.158829   13524 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 05:10:36.158950   13524 kubeadm.go:319] CONFIG_INET: enabled
	I1216 05:10:36.159106   13524 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 05:10:36.159146   13524 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 05:10:36.159725   13524 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 05:10:36.159807   13524 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 05:10:36.159927   13524 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 05:10:36.160002   13524 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 05:10:36.160137   13524 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 05:10:36.160246   13524 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 05:10:36.160368   13524 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 05:10:36.160629   13524 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] OS: Linux
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 05:10:36.160656   13524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 05:10:36.160977   13524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 05:10:36.161060   13524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 05:10:36.161119   13524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 05:10:36.161172   13524 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 05:10:36.263883   13524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 05:10:36.264641   13524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 05:10:36.285337   13524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 05:10:36.291241   13524 out.go:252]   - Generating certificates and keys ...
	I1216 05:10:36.291368   13524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 05:10:36.291473   13524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 05:10:36.291610   13524 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 05:10:36.291701   13524 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 05:10:36.292292   13524 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 05:10:36.292479   13524 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 05:10:36.292479   13524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 05:10:36.355551   13524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 05:10:36.426990   13524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 05:10:36.485556   13524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 05:10:36.680670   13524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 05:10:36.834763   13524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 05:10:36.835291   13524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 05:10:36.840606   13524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 05:10:36.844374   13524 out.go:252]   - Booting up control plane ...
	I1216 05:10:36.844573   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 05:10:36.844629   13524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 05:10:36.865895   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 05:10:36.874270   13524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 05:10:37.021660   13524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 05:10:37.022023   13524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 05:14:36.995901   13524 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000744142s
	I1216 05:14:36.995988   13524 kubeadm.go:319] 
	I1216 05:14:36.996138   13524 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 05:14:36.996214   13524 kubeadm.go:319] 	- The kubelet is not running
	I1216 05:14:36.996375   13524 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 05:14:36.996375   13524 kubeadm.go:319] 
	I1216 05:14:36.996441   13524 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 05:14:36.996441   13524 kubeadm.go:319] 
	I1216 05:14:37.001376   13524 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 05:14:37.002575   13524 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 05:14:37.002650   13524 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 05:14:37.002650   13524 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 05:14:37.002650   13524 kubeadm.go:319] 
	I1216 05:14:37.003329   13524 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 05:14:37.003329   13524 kubeadm.go:403] duration metric: took 12m6.0383556s to StartCluster
	I1216 05:14:37.003329   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:14:37.007935   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:14:37.064773   13524 cri.go:89] found id: ""
	I1216 05:14:37.064773   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.064773   13524 logs.go:284] No container was found matching "kube-apiserver"
	I1216 05:14:37.064773   13524 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:14:37.069487   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:14:37.111914   13524 cri.go:89] found id: ""
	I1216 05:14:37.111914   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.111914   13524 logs.go:284] No container was found matching "etcd"
	I1216 05:14:37.111914   13524 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:14:37.116663   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:14:37.152644   13524 cri.go:89] found id: ""
	I1216 05:14:37.152667   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.152667   13524 logs.go:284] No container was found matching "coredns"
	I1216 05:14:37.152667   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:14:37.157010   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:14:37.200196   13524 cri.go:89] found id: ""
	I1216 05:14:37.200196   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.200196   13524 logs.go:284] No container was found matching "kube-scheduler"
	I1216 05:14:37.200268   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:14:37.204321   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:14:37.243623   13524 cri.go:89] found id: ""
	I1216 05:14:37.243623   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.243623   13524 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:14:37.243623   13524 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:14:37.248366   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:14:37.289277   13524 cri.go:89] found id: ""
	I1216 05:14:37.289277   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.289277   13524 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 05:14:37.289277   13524 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:14:37.294034   13524 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:14:37.333593   13524 cri.go:89] found id: ""
	I1216 05:14:37.333593   13524 logs.go:282] 0 containers: []
	W1216 05:14:37.333593   13524 logs.go:284] No container was found matching "kindnet"
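	Each empty result above comes from the same per-component scan: minikube queries the CRI for every expected control-plane container by name and finds none, confirming that no component was ever started. The equivalent check can be reproduced by hand (a sketch; the component list is taken from the log above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"   # empty output means no such container exists
	done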
	I1216 05:14:37.333593   13524 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:14:37.333593   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:14:37.417323   13524 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 05:14:37.408448   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.410123   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.411118   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.412628   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:14:37.413931   40064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:14:37.417323   13524 logs.go:123] Gathering logs for Docker ...
	I1216 05:14:37.417323   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 05:14:37.457412   13524 logs.go:123] Gathering logs for container status ...
	I1216 05:14:37.457412   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:14:37.504416   13524 logs.go:123] Gathering logs for kubelet ...
	I1216 05:14:37.504416   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:14:37.564994   13524 logs.go:123] Gathering logs for dmesg ...
	I1216 05:14:37.564994   13524 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 05:14:37.597706   13524 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.597706   13524 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:14:37.597706   13524 out.go:285] * 
	W1216 05:14:37.600079   13524 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 05:14:37.606140   13524 out.go:203] 
	W1216 05:14:37.609999   13524 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000744142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 05:14:37.610044   13524 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 05:14:37.610044   13524 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 05:14:37.613011   13524 out.go:203] 
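	The suggestion printed above maps to a concrete invocation. As a sketch only (the profile name is taken from this run, the flag is the one the log itself proposes, and whether it resolves the failure on this WSL2 host is unverified):

	minikube start -p functional-002200 --extra-config=kubelet.cgroup-driver=systemd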
	
	
	==> Docker <==
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685355275Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685360576Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685379878Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.685414282Z" level=info msg="Initializing buildkit"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.786375434Z" level=info msg="Completed buildkit initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.794992212Z" level=info msg="Daemon has completed initialization"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151030Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795151330Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 05:02:27 functional-002200 dockerd[20947]: time="2025-12-16T05:02:27.795240140Z" level=info msg="API listen on [::]:2376"
	Dec 16 05:02:27 functional-002200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:27 functional-002200 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 16 05:02:27 functional-002200 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 16 05:02:28 functional-002200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Loaded network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 05:02:28 functional-002200 cri-dockerd[21269]: time="2025-12-16T05:02:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 05:02:28 functional-002200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 05:16:56.490407   43838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:56.492093   43838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:56.494844   43838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:56.496230   43838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1216 05:16:56.497404   43838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000761] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000760] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000766] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000779] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000780] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 05:02] CPU: 4 PID: 64252 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001010] RIP: 0033:0x7fd1009fcb20
	[  +0.000612] Code: Unable to access opcode bytes at RIP 0x7fd1009fcaf6.
	[  +0.000841] RSP: 002b:00007ffd77dbf430 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000856] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000836] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000800] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000980] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000812] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000777] FS:  0000000000000000 GS:  0000000000000000
	[  +0.813920] CPU: 4 PID: 64374 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000818] RIP: 0033:0x7f21a7b6eb20
	[  +0.000430] Code: Unable to access opcode bytes at RIP 0x7f21a7b6eaf6.
	[  +0.000678] RSP: 002b:00007ffd5fd33ec0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000807] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000827] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000787] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000791] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000788] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:16:56 up 53 min,  0 user,  load average: 0.37, 0.33, 0.41
	Linux functional-002200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 05:16:52 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:53 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 502.
	Dec 16 05:16:53 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:53 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:53 functional-002200 kubelet[43665]: E1216 05:16:53.739346   43665 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:53 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:53 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:54 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 503.
	Dec 16 05:16:54 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:54 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:54 functional-002200 kubelet[43679]: E1216 05:16:54.507687   43679 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:54 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:54 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:55 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 504.
	Dec 16 05:16:55 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:55 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:55 functional-002200 kubelet[43698]: E1216 05:16:55.253499   43698 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:55 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:55 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 05:16:55 functional-002200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 505.
	Dec 16 05:16:55 functional-002200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:55 functional-002200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 05:16:56 functional-002200 kubelet[43720]: E1216 05:16:56.009168   43720 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 05:16:56 functional-002200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 05:16:56 functional-002200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-002200 -n functional-002200: exit status 2 (572.6517ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-002200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (53.98s)
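The kubelet crash loop above is the common root cause behind most failures in this report: the v1.35.0-beta.0 kubelet exits during config validation because the WSL2 host (kernel 5.15.153.1-microsoft-standard-WSL2) is running with cgroup v1, and systemd keeps restarting it (restart counters 502-505). A minimal sketch of an equivalent host-side check, assuming a Linux host and using only the Go standard library (the constant name cgroup2SuperMagic is ours, copied from linux/magic.h):

	// cgroupcheck.go - not part of the test suite; illustrates the condition
	// that makes kubelet v1.35.0-beta.0 refuse to start in the log above.
	package main

	import (
		"fmt"
		"syscall"
	)

	// CGROUP2_SUPER_MAGIC from linux/magic.h: a cgroup v2 (unified) host
	// mounts /sys/fs/cgroup with this filesystem type; cgroup v1 hosts do not.
	const cgroup2SuperMagic = 0x63677270

	func main() {
		var fs syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &fs); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if int64(fs.Type) == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy): kubelet would start")
		} else {
			fmt.Println("cgroup v1 (legacy hierarchy): kubelet refuses to run")
		}
	}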

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (3.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-002200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-002200"
functional_test.go:514: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-002200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-002200": exit status 1 (3.1006251s)

-- stdout --
	functional-002200
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (3.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-002200 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-002200 create deployment hello-node --image kicbase/echo-server: exit status 1 (94.9163ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:49316/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-002200 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.10s)
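The EOF on the Post to https://127.0.0.1:49316 is consistent with the apiserver state above: one plausible reading is that the Docker-forwarded port accepts the TCP connection, but with no apiserver behind it the connection is closed before any HTTP response arrives, and the client surfaces that as EOF. A hedged stand-in that reproduces only the error shape (the immediately-closing listener is hypothetical, simulating a forward with no backend):

	// eofdemo.go - reproduces the error shape only; the closing listener is
	// a hypothetical stand-in for a port forward with no apiserver behind it.
	package main

	import (
		"fmt"
		"net"
		"net/http"
	)

	func main() {
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		go func() {
			for {
				c, err := ln.Accept()
				if err != nil {
					return
				}
				c.Close() // accept, then close without responding
			}
		}()
		_, err = http.Get("http://" + ln.Addr().String() + "/healthz")
		fmt.Println(err) // Get "http://127.0.0.1:PORT/healthz": EOF
	}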

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 service list: exit status 103 (495.3438ms)

-- stdout --
	* The control-plane node functional-002200 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-002200"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-002200 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-002200 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-002200\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 service list -o json: exit status 103 (475.1337ms)

-- stdout --
	* The control-plane node functional-002200 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-002200"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-002200 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 service --namespace=default --https --url hello-node: exit status 103 (509.9298ms)

-- stdout --
	* The control-plane node functional-002200 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-002200"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-002200 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 service hello-node --url --format={{.IP}}: exit status 103 (521.3514ms)

-- stdout --
	* The control-plane node functional-002200 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-002200"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-002200 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-002200 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-002200\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 service hello-node --url: exit status 103 (497.2293ms)

-- stdout --
	* The control-plane node functional-002200 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-002200"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-002200 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-002200 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-002200"
functional_test.go:1579: failed to parse "* The control-plane node functional-002200 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-002200\"": parse "* The control-plane node functional-002200 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-002200\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.50s)
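The parse failure at functional_test.go:1579 is mechanical: instead of a service URL, minikube printed its two-line advice to stdout, and Go's url.Parse rejects ASCII control characters such as the embedded newline. A self-contained sketch (the string is abbreviated from the log above):

	// urldemo.go - shows why the captured stdout cannot parse as a URL.
	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		out := "* The control-plane node apiserver is not running: (state=Stopped)\n" +
			"  To start a cluster, run: \"minikube start\""
		_, err := url.Parse(out)
		fmt.Println(err) // parse "...": net/url: invalid control character in URL
	}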

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1216 05:16:17.815529    1452 out.go:360] Setting OutFile to fd 1284 ...
I1216 05:16:17.881150    1452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:16:17.881150    1452 out.go:374] Setting ErrFile to fd 1184...
I1216 05:16:17.881191    1452 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:16:17.894367    1452 mustload.go:66] Loading cluster: functional-002200
I1216 05:16:17.895043    1452 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:16:17.901351    1452 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
I1216 05:16:17.963798    1452 host.go:66] Checking if "functional-002200" exists ...
I1216 05:16:17.968514    1452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-002200
I1216 05:16:18.016800    1452 api_server.go:166] Checking apiserver status ...
I1216 05:16:18.019787    1452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 05:16:18.023677    1452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
I1216 05:16:18.081600    1452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
W1216 05:16:18.210179    1452 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1216 05:16:18.213538    1452 out.go:179] * The control-plane node functional-002200 apiserver is not running: (state=Stopped)
I1216 05:16:18.215777    1452 out.go:179]   To start a cluster, run: "minikube start -p functional-002200"

stdout: * The control-plane node functional-002200 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-002200"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 7976: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr] stdout:
* The control-plane node functional-002200 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-002200"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)
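For context on the exit-103 path in the tunnel log above: minikube checks the apiserver by running sudo pgrep -xnf kube-apiserver.*minikube.* over SSH, and pgrep exits with status 1 when no process matches, which api_server.go:170 records as "stopped". A minimal local sketch of that exit-code convention, assuming pgrep is installed (run locally rather than over SSH):

	// pgrepdemo.go - illustrates the pgrep exit-code convention minikube
	// relies on when deciding the apiserver state.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kube-apiserver is running")
		case errors.As(err, &ee) && ee.ExitCode() == 1:
			fmt.Println("no matching process: minikube reports state=Stopped")
		default:
			fmt.Println("check itself failed:", err)
		}
	}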

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-002200 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-002200 apply -f testdata\testsvc.yaml: exit status 1 (20.1729927s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:49316/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-002200 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)

TestKubernetesUpgrade (833.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-633300 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-633300 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (59.1969386s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-633300
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-633300: (3.2136875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-633300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-633300 status --format={{.Host}}: exit status 7 (205.5515ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-633300 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-633300 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (12m33.6733673s)

-- stdout --
	* [kubernetes-upgrade-633300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-633300" primary control-plane node in "kubernetes-upgrade-633300" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	
	

-- /stdout --
** stderr ** 
	I1216 06:03:31.450984   11368 out.go:360] Setting OutFile to fd 1816 ...
	I1216 06:03:31.494563   11368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:03:31.494563   11368 out.go:374] Setting ErrFile to fd 1536...
	I1216 06:03:31.494563   11368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:03:31.507582   11368 out.go:368] Setting JSON to false
	I1216 06:03:31.510570   11368 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6033,"bootTime":1765858978,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:03:31.510570   11368 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:03:31.515784   11368 out.go:179] * [kubernetes-upgrade-633300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:03:31.518754   11368 notify.go:221] Checking for updates...
	I1216 06:03:31.522075   11368 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:03:31.525064   11368 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:03:31.532147   11368 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:03:31.538805   11368 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:03:31.543393   11368 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:03:31.546596   11368 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1216 06:03:31.547387   11368 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:03:31.657453   11368 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:03:31.661453   11368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:03:31.898053   11368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:03:31.880028106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:03:31.902054   11368 out.go:179] * Using the docker driver based on existing profile
	I1216 06:03:31.904054   11368 start.go:309] selected driver: docker
	I1216 06:03:31.904054   11368 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-633300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-633300 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:03:31.904054   11368 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:03:31.955690   11368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:03:32.194410   11368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:03:32.17645371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescri
ption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progra
m Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:03:32.195029   11368 cni.go:84] Creating CNI manager for ""
	I1216 06:03:32.195029   11368 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:03:32.195029   11368 start.go:353] cluster config:
	{Name:kubernetes-upgrade-633300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-633300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:03:32.197639   11368 out.go:179] * Starting "kubernetes-upgrade-633300" primary control-plane node in "kubernetes-upgrade-633300" cluster
	I1216 06:03:32.200534   11368 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:03:32.204707   11368 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:03:32.208365   11368 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:03:32.208365   11368 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:03:32.208365   11368 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 06:03:32.208365   11368 cache.go:65] Caching tarball of preloaded images
	I1216 06:03:32.209034   11368 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:03:32.209034   11368 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 06:03:32.209741   11368 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\config.json ...
	I1216 06:03:32.278648   11368 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:03:32.278648   11368 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:03:32.278648   11368 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:03:32.278648   11368 start.go:360] acquireMachinesLock for kubernetes-upgrade-633300: {Name:mka2ca3b6ae57f943547d9aed044de774678d2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:03:32.278648   11368 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubernetes-upgrade-633300"
	I1216 06:03:32.278648   11368 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:03:32.278648   11368 fix.go:54] fixHost starting: 
	I1216 06:03:32.285633   11368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-633300 --format={{.State.Status}}
	I1216 06:03:32.334637   11368 fix.go:112] recreateIfNeeded on kubernetes-upgrade-633300: state=Stopped err=<nil>
	W1216 06:03:32.334637   11368 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:03:32.337644   11368 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-633300" ...
	I1216 06:03:32.340639   11368 cli_runner.go:164] Run: docker start kubernetes-upgrade-633300
	I1216 06:03:32.876916   11368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-633300 --format={{.State.Status}}
	I1216 06:03:32.932486   11368 kic.go:430] container "kubernetes-upgrade-633300" state is running.
	I1216 06:03:32.937485   11368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-633300
	I1216 06:03:32.994500   11368 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\config.json ...
	I1216 06:03:32.996486   11368 machine.go:94] provisionDockerMachine start ...
	I1216 06:03:33.000489   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:33.062630   11368 main.go:143] libmachine: Using SSH client type: native
	I1216 06:03:33.063642   11368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54078 <nil> <nil>}
	I1216 06:03:33.063642   11368 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:03:33.064636   11368 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 06:03:36.246140   11368 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-633300
	
	I1216 06:03:36.246140   11368 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-633300"
	I1216 06:03:36.250139   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:36.304128   11368 main.go:143] libmachine: Using SSH client type: native
	I1216 06:03:36.304128   11368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54078 <nil> <nil>}
	I1216 06:03:36.304128   11368 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-633300 && echo "kubernetes-upgrade-633300" | sudo tee /etc/hostname
	I1216 06:03:36.480062   11368 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-633300
	
	I1216 06:03:36.483130   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:36.540958   11368 main.go:143] libmachine: Using SSH client type: native
	I1216 06:03:36.540958   11368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54078 <nil> <nil>}
	I1216 06:03:36.540958   11368 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-633300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-633300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-633300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:03:36.698476   11368 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:03:36.698476   11368 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:03:36.698587   11368 ubuntu.go:190] setting up certificates
	I1216 06:03:36.698587   11368 provision.go:84] configureAuth start
	I1216 06:03:36.702447   11368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-633300
	I1216 06:03:36.763455   11368 provision.go:143] copyHostCerts
	I1216 06:03:36.763455   11368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:03:36.763455   11368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:03:36.764134   11368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:03:36.765740   11368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:03:36.765789   11368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:03:36.766174   11368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:03:36.767148   11368 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:03:36.767187   11368 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:03:36.767374   11368 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:03:36.768302   11368 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-633300 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-633300 localhost minikube]
	I1216 06:03:36.933648   11368 provision.go:177] copyRemoteCerts
	I1216 06:03:36.939955   11368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:03:36.945334   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:37.001330   11368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54078 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-633300\id_rsa Username:docker}
	I1216 06:03:37.131920   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:03:37.162343   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 06:03:37.191291   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:03:37.217541   11368 provision.go:87] duration metric: took 518.9473ms to configureAuth
	I1216 06:03:37.217591   11368 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:03:37.218097   11368 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:03:37.222005   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:37.287826   11368 main.go:143] libmachine: Using SSH client type: native
	I1216 06:03:37.287826   11368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54078 <nil> <nil>}
	I1216 06:03:37.288819   11368 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:03:37.449444   11368 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:03:37.449444   11368 ubuntu.go:71] root file system type: overlay
	I1216 06:03:37.449444   11368 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:03:37.456616   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:37.515316   11368 main.go:143] libmachine: Using SSH client type: native
	I1216 06:03:37.516313   11368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54078 <nil> <nil>}
	I1216 06:03:37.516313   11368 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:03:37.703278   11368 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:03:37.711000   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:37.768845   11368 main.go:143] libmachine: Using SSH client type: native
	I1216 06:03:37.768845   11368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54078 <nil> <nil>}
	I1216 06:03:37.768845   11368 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:03:37.955328   11368 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:03:37.955385   11368 machine.go:97] duration metric: took 4.9587777s to provisionDockerMachine
	I1216 06:03:37.955385   11368 start.go:293] postStartSetup for "kubernetes-upgrade-633300" (driver="docker")
	I1216 06:03:37.955385   11368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:03:37.959795   11368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:03:37.964126   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:38.020810   11368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54078 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-633300\id_rsa Username:docker}
	I1216 06:03:38.149266   11368 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:03:38.157840   11368 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:03:38.157840   11368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:03:38.157840   11368 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:03:38.157840   11368 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:03:38.158851   11368 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:03:38.163604   11368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:03:38.176774   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:03:38.206099   11368 start.go:296] duration metric: took 250.7113ms for postStartSetup
	I1216 06:03:38.210492   11368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:03:38.213660   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:38.270061   11368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54078 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-633300\id_rsa Username:docker}
	I1216 06:03:38.403268   11368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:03:38.413964   11368 fix.go:56] duration metric: took 6.1352036s for fixHost
	I1216 06:03:38.413994   11368 start.go:83] releasing machines lock for "kubernetes-upgrade-633300", held for 6.1352666s
	I1216 06:03:38.417585   11368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-633300
	I1216 06:03:38.476393   11368 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:03:38.481074   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:38.482632   11368 ssh_runner.go:195] Run: cat /version.json
	I1216 06:03:38.485685   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:38.535923   11368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54078 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-633300\id_rsa Username:docker}
	I1216 06:03:38.537847   11368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54078 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-633300\id_rsa Username:docker}
	W1216 06:03:38.646427   11368 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
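The probe fails with exit 127 because the Windows host-side binary name is reused inside the Linux guest: curl.exe exists on the Jenkins host but not in the container, so the connectivity check never actually runs and the registry warning below is emitted even if the network is fine. A manual check from inside the guest would use the Linux binary name (hypothetical; assumes plain curl is present in the kicbase image):

    docker exec kubernetes-upgrade-633300 curl -sS -m 2 https://registry.k8s.io/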
	I1216 06:03:38.661008   11368 ssh_runner.go:195] Run: systemctl --version
	I1216 06:03:38.675030   11368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:03:38.684171   11368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:03:38.689916   11368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:03:38.705024   11368 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:03:38.705024   11368 start.go:496] detecting cgroup driver to use...
	I1216 06:03:38.705024   11368 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:03:38.705024   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:03:38.732483   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:03:38.748678   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1216 06:03:38.753424   11368 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:03:38.753424   11368 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:03:38.765992   11368 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:03:38.771548   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:03:38.791219   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:03:38.812203   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:03:38.838455   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:03:38.854385   11368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:03:38.870391   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:03:38.886382   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:03:38.903218   11368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:03:38.920385   11368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:03:38.938830   11368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:03:38.958441   11368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:03:39.113036   11368 ssh_runner.go:195] Run: sudo systemctl restart containerd
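The run of sed commands above edits /etc/containerd/config.toml in place rather than templating a whole new file: pause image, cgroup driver, runc v2 runtime, CNI conf_dir, and unprivileged ports are each rewritten with a targeted substitution. The cgroup-driver switch, for example (verbatim from the log, annotated):

    # \1 re-inserts the captured leading spaces so TOML indentation survives
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml

containerd is then reloaded and restarted before attention turns back to docker, the runtime this profile actually uses.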
	I1216 06:03:39.280037   11368 start.go:496] detecting cgroup driver to use...
	I1216 06:03:39.280037   11368 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:03:39.284128   11368 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:03:39.307402   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:03:39.327573   11368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:03:39.400865   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:03:39.422161   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:03:39.444896   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:03:39.476592   11368 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:03:39.490336   11368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:03:39.500960   11368 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:03:39.524579   11368 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:03:39.662124   11368 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:03:39.812932   11368 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:03:39.812932   11368 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
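The 130-byte /etc/docker/daemon.json pushed here carries the cgroup-driver setting. Its exact content is not echoed in the log; a representative sketch of the cgroupfs case (the field names are docker's, the specific values are an assumption):

    { "exec-opts": ["native.cgroupdriver=cgroupfs"], "log-driver": "json-file" }

The choice is verified further down with docker info --format {{.CgroupDriver}}.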
	I1216 06:03:39.837695   11368 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:03:39.860890   11368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:03:40.010266   11368 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:03:41.014151   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:03:41.042613   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:03:41.068617   11368 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 06:03:41.093613   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:03:41.114518   11368 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:03:41.283590   11368 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:03:41.434653   11368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:03:41.548601   11368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:03:41.575964   11368 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:03:41.598229   11368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:03:41.732106   11368 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:03:41.860168   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:03:41.879563   11368 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:03:41.883459   11368 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:03:41.890020   11368 start.go:564] Will wait 60s for crictl version
	I1216 06:03:41.895275   11368 ssh_runner.go:195] Run: which crictl
	I1216 06:03:41.905847   11368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:03:41.952069   11368 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:03:41.955280   11368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:03:41.997067   11368 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:03:42.039161   11368 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 06:03:42.043566   11368 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-633300 dig +short host.docker.internal
	I1216 06:03:42.185695   11368 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:03:42.190151   11368 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:03:42.197524   11368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
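The /etc/hosts update uses a filter-and-copy pattern instead of sed -i: inside a container /etc/hosts is a bind mount, so the file has to be rewritten in place (cp back over it) rather than replaced with a new inode. The same pipeline, unrolled with comments:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any stale entry
      echo $'192.168.65.254\thost.minikube.internal'     # append the fresh mapping
    } > /tmp/h.$$                                        # assemble in a temp file
    sudo cp /tmp/h.$$ /etc/hosts                         # cp writes in place, keeping the bind-mounted inode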
	I1216 06:03:42.218565   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:42.277056   11368 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-633300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-633300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:03:42.277056   11368 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:03:42.281107   11368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:03:42.312497   11368 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:03:42.312497   11368 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1216 06:03:42.316484   11368 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 06:03:42.332503   11368 ssh_runner.go:195] Run: which lz4
	I1216 06:03:42.349070   11368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 06:03:42.357065   11368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 06:03:42.357065   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284622240 bytes)
	I1216 06:03:45.303339   11368 docker.go:655] duration metric: took 2.9582235s to copy over tarball
	I1216 06:03:45.307339   11368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 06:03:47.590022   11368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.2826542s)
	I1216 06:03:47.590022   11368 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 06:03:47.646197   11368 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1216 06:03:47.659972   11368 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2660 bytes)
	I1216 06:03:47.686866   11368 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:03:47.710443   11368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:03:47.864654   11368 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:03:55.147333   11368 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.2825846s)
	I1216 06:03:55.151482   11368 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:03:55.182380   11368 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
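This is the preload shortcut in full: the first image listing above still showed the v1.28.0 set, so minikube copied a 284,622,240-byte lz4 tarball into the guest, unpacked it directly over /var, restored docker's repositories.json, and restarted docker; the second listing now shows the full v1.35.0-beta.0 set without a single registry pull. The extraction step, annotated:

    # unpack image layers and metadata straight into /var/lib/docker,
    # preserving file capabilities on the extracted binaries
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4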
	I1216 06:03:55.182429   11368 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:03:55.182429   11368 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 docker true true} ...
	I1216 06:03:55.182429   11368 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-633300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-633300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
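In the kubelet drop-in above, the empty ExecStart= line is deliberate: for a non-oneshot service systemd allows only one ExecStart, so a drop-in must first clear the inherited value before redefining it. Once the drop-in is installed (as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), the merged result can be inspected with the same tool the log uses for docker:

    sudo systemctl cat kubelet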
	I1216 06:03:55.186813   11368 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:03:55.263198   11368 cni.go:84] Creating CNI manager for ""
	I1216 06:03:55.263198   11368 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:03:55.263198   11368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:03:55.263198   11368 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-633300 NodeName:kubernetes-upgrade-633300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:03:55.263198   11368 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-633300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
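	The generated file bundles four documents: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a bundle before it is used (a hypothetical manual step; minikube relies on the init phases further below instead):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml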
	
	I1216 06:03:55.268187   11368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:03:55.280004   11368 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:03:55.283998   11368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:03:55.294998   11368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I1216 06:03:55.314245   11368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:03:55.335803   11368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1216 06:03:55.356809   11368 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:03:55.363807   11368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:03:55.382250   11368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:03:55.524214   11368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:03:55.545185   11368 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300 for IP: 192.168.85.2
	I1216 06:03:55.545185   11368 certs.go:195] generating shared ca certs ...
	I1216 06:03:55.545185   11368 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:03:55.545810   11368 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:03:55.546272   11368 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:03:55.546410   11368 certs.go:257] generating profile certs ...
	I1216 06:03:55.547006   11368 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\client.key
	I1216 06:03:55.547006   11368 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\apiserver.key.1ea9618b
	I1216 06:03:55.547794   11368 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\proxy-client.key
	I1216 06:03:55.549097   11368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:03:55.549297   11368 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:03:55.549297   11368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:03:55.549297   11368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:03:55.549975   11368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:03:55.549975   11368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:03:55.550653   11368 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:03:55.552039   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:03:55.582117   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:03:55.606568   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:03:55.633758   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:03:55.659623   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 06:03:55.687478   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:03:55.716664   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:03:55.746716   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-633300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:03:55.772981   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:03:55.800956   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:03:55.827830   11368 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:03:55.852061   11368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:03:55.874064   11368 ssh_runner.go:195] Run: openssl version
	I1216 06:03:55.889105   11368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:03:55.903800   11368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:03:55.920919   11368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:03:55.927934   11368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:03:55.931927   11368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:03:55.980350   11368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:03:55.996014   11368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:03:56.010012   11368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:03:56.025014   11368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:03:56.032026   11368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:03:56.036014   11368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:03:56.087069   11368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:03:56.104474   11368 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:03:56.120433   11368 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:03:56.139248   11368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:03:56.148391   11368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:03:56.152009   11368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:03:56.199745   11368 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
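Each CA certificate is installed twice: copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 above), the directory-lookup convention OpenSSL uses to find trust anchors. The per-certificate recipe the log implements, reconstructed:

    cert=/usr/share/ca-certificates/117042.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. 3ec20f2e
    sudo ln -fs "$cert" /etc/ssl/certs/"$hash".0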
	I1216 06:03:56.217403   11368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:03:56.237218   11368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:03:56.300133   11368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:03:56.355496   11368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:03:56.420500   11368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:03:56.488510   11368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:03:56.546339   11368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
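The six -checkend runs are expiry probes: openssl x509 -checkend 86400 exits 0 only if the certificate is still valid 86,400 seconds (24 hours) from now, and a non-zero exit would force regeneration before the control plane restarts. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon: regenerate"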
	I1216 06:03:56.591898   11368 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-633300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-633300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:03:56.597644   11368 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:03:56.633490   11368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:03:56.647957   11368 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:03:56.647957   11368 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:03:56.653247   11368 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:03:56.665523   11368 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:03:56.669063   11368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-633300
	I1216 06:03:56.720322   11368 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-633300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:03:56.721322   11368 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-633300" cluster setting kubeconfig missing "kubernetes-upgrade-633300" context setting]
	I1216 06:03:56.721322   11368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:03:56.747332   11368 kapi.go:59] client config for kubernetes-upgrade-633300: &rest.Config{Host:"https://127.0.0.1:54082", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-633300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-633300/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff78e429080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 06:03:56.748325   11368 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 06:03:56.748325   11368 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 06:03:56.748325   11368 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 06:03:56.748325   11368 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 06:03:56.748325   11368 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 06:03:56.754334   11368 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:03:56.769646   11368 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-16 06:03:01.663991958 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-16 06:03:55.345626925 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-633300"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
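The detected drift is the kubeadm API bump from v1beta3 to v1beta4: extraArgs move from a string map to a list of name/value pairs, the etcd proxy-refresh-interval override is dropped, and kubernetesVersion jumps from v1.28.0 to v1.35.0-beta.0. kubeadm can perform this translation itself (a hypothetical equivalent; minikube simply regenerates the file from its own template):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm.v1beta4.yaml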
	I1216 06:03:56.769646   11368 kubeadm.go:1161] stopping kube-system containers ...
	I1216 06:03:56.772649   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:03:56.801938   11368 docker.go:484] Stopping containers: [8466efe0438e 06ae27c05587 cf492f615c62 b5eb8faf39e0 faec818f5a15 2f7a91469129 433a8b8fad71 de2ae2c5d3a0]
	I1216 06:03:56.805451   11368 ssh_runner.go:195] Run: docker stop 8466efe0438e 06ae27c05587 cf492f615c62 b5eb8faf39e0 faec818f5a15 2f7a91469129 433a8b8fad71 de2ae2c5d3a0
	I1216 06:03:56.854344   11368 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 06:03:56.956529   11368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:03:56.968918   11368 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 16 06:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 16 06:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 16 06:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 16 06:03 /etc/kubernetes/scheduler.conf
	
	I1216 06:03:56.973018   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:03:56.990518   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:03:57.009910   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:03:57.023046   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:03:57.031281   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:03:57.053420   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:03:57.067856   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:03:57.071306   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:03:57.086847   11368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:03:57.104836   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:03:57.182334   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:03:57.837515   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:03:58.110511   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 06:03:58.191195   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
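Because existing configuration files were found, the cluster is restarted phase by phase rather than through a full kubeadm init: certificates, kubeconfig files, kubelet bootstrap, the three control-plane static pods, then local etcd, all against the new kubeadm.yaml. The same sequence as a standalone script (commands as run above):

    KV=v1.35.0-beta.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two arguments
      sudo env PATH="/var/lib/minikube/binaries/$KV:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done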
	I1216 06:03:58.259723   11368 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:03:58.264238   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:03:58.764558   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:03:59.263146   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:03:59.765314   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:00.265895   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:00.764794   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:01.265663   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:01.765582   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:02.264755   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:02.763913   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:03.266823   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:03.766850   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:04.264381   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:04.764262   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:05.264816   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:05.764262   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:06.267020   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:06.764924   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:07.264139   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:07.765049   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:08.265576   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:08.764742   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:09.266111   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:09.767142   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:10.265172   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:10.765100   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:11.265657   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:11.764395   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:12.265762   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:12.764611   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:13.267193   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:13.766013   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:14.263273   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:14.766886   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:15.265996   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:15.766082   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:16.265420   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:16.772497   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:17.265765   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:17.764001   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:18.265045   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:18.764779   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:19.263998   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:19.764197   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:20.265926   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:20.765983   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:21.266750   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:21.764553   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:22.265662   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:22.764383   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:23.265844   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:23.764641   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:24.263859   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:24.764394   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:25.265446   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:25.764732   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:26.265863   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:26.766148   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:27.264991   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:27.764495   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:28.267051   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:28.766077   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:29.265539   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:29.764861   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:30.267478   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:30.765637   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:31.267057   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:31.765689   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:32.265753   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:32.765105   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:33.266274   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:33.765264   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:34.265386   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:34.766776   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:35.265755   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:35.766690   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:36.265384   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:36.766824   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:37.266623   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:37.764673   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:38.264582   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:38.766822   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:39.266947   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:39.764137   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:40.265728   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:40.767621   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:41.265321   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:41.764744   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:42.265226   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:42.765372   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:43.267786   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:43.764130   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:44.266352   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:44.766128   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:45.265800   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:45.763565   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:46.264702   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:46.765979   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:47.267009   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:47.767671   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:48.264795   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:48.764657   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:49.265715   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:49.765020   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:50.265458   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:50.766992   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:51.265502   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:51.767409   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:52.264599   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:52.768479   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:53.268894   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:53.788330   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:54.269406   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:54.771481   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:55.268284   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:55.767400   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:56.268543   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:56.764750   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:57.267201   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:04:57.765623   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
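None of these probes ever returns a PID: the apiserver static pod never comes up, so the roughly 500 ms pgrep loop exhausts its 60-second budget (started at 06:03:58) and minikube switches to gathering diagnostics below. The wait amounts to (a sketch of the loop the log implies):

    deadline=$(( $(date +%s) + 60 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && break   # give up and collect logs
      sleep 0.5
    done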
	I1216 06:04:58.265050   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:04:58.321708   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:04:58.325771   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:04:58.361774   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:04:58.367778   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:04:58.404772   11368 logs.go:282] 0 containers: []
	W1216 06:04:58.404772   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:04:58.408771   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:04:58.447773   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:04:58.452779   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:04:58.491768   11368 logs.go:282] 0 containers: []
	W1216 06:04:58.491768   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:04:58.494769   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:04:58.533773   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:04:58.536783   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:04:58.573768   11368 logs.go:282] 0 containers: []
	W1216 06:04:58.573768   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:04:58.576772   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:04:58.607388   11368 logs.go:282] 0 containers: []
	W1216 06:04:58.607388   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:04:58.607388   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:04:58.607388   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:04:58.669385   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:04:58.669385   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:04:58.705012   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:04:58.705012   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:04:58.751435   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:04:58.751435   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:04:58.792432   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:04:58.792432   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:04:58.884455   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:04:58.884455   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:04:58.884455   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:04:58.937231   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:04:58.937231   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:04:58.983489   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:04:58.983489   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:04:59.016790   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:04:59.016790   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
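
The block above is one complete log-gathering pass: for each control-plane component, minikube lists matching containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails the last 400 lines of any match. The following is a minimal sketch of that pattern, assuming only a local `docker` CLI; the component list and helper names are illustrative, not minikube's actual logs.go:

```go
// Illustrative reconstruction of the lookup-then-tail pattern in the log
// above. Assumes a reachable `docker` CLI; not minikube's real code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mimics: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mimics: docker logs --tail 400 <id>
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// Matches the log's "No container was found matching ..." warnings.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```

Note how this matches the run above: kube-apiserver, etcd, kube-scheduler, and kube-controller-manager each resolve to one container, while coredns, kube-proxy, kindnet, and storage-provisioner resolve to none, which is consistent with an apiserver that never became reachable.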
	I1216 06:05:01.599351   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:01.622358   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:01.655077   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:01.658915   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:01.696102   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:01.700414   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:01.729281   11368 logs.go:282] 0 containers: []
	W1216 06:05:01.729281   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:01.732992   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:01.763414   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:01.766406   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:01.799407   11368 logs.go:282] 0 containers: []
	W1216 06:05:01.799407   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:01.802411   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:01.834407   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:01.837411   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:01.867068   11368 logs.go:282] 0 containers: []
	W1216 06:05:01.867068   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:01.870929   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:01.906551   11368 logs.go:282] 0 containers: []
	W1216 06:05:01.906551   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:01.906551   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:01.906551   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:01.945255   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:01.945255   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:01.985291   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:01.985291   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:02.027809   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:02.027809   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:02.062811   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:02.062811   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:02.090815   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:02.090815   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:02.141679   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:02.141679   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:02.208838   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:02.208838   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:02.306544   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:02.306544   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:02.306544   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
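
Between gathering passes, the runner re-checks for an apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`, at roughly 500 ms intervals early on and every few seconds thereafter. A rough reconstruction of that wait loop follows; the overall deadline is an assumption, since minikube's real timeout is not visible in this excerpt:

```go
// Sketch of the apiserver wait loop implied by the repeated pgrep lines.
// Editor reconstruction; the 8-minute deadline is assumed, not from the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a matching process.
// pgrep exits non-zero when nothing matches, which Run() surfaces as an error.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the early cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```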
	I1216 06:05:04.855051   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:04.878071   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:04.921056   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:04.928058   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:04.972066   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:04.981455   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:05.030795   11368 logs.go:282] 0 containers: []
	W1216 06:05:05.030795   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:05.034798   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:05.076802   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:05.080807   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:05.119812   11368 logs.go:282] 0 containers: []
	W1216 06:05:05.119812   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:05.123797   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:05.159797   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:05.164798   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:05.198833   11368 logs.go:282] 0 containers: []
	W1216 06:05:05.198833   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:05.202801   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:05.234810   11368 logs.go:282] 0 containers: []
	W1216 06:05:05.234810   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:05.234810   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:05.234810   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:05.291798   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:05.291798   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:05.394820   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:05.394820   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:05.394820   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:05.449812   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:05.449812   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:05.499796   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:05.499796   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:05.534810   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:05.534810   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:05.595806   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:05.595806   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:05.677804   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:05.677804   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:05.726397   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:05.726397   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
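
Every `describe nodes` attempt in this excerpt fails the same way: the node-local kubeconfig points at localhost:8443 and nothing is listening. A quick probe that reproduces just that symptom from inside the node is sketched below; the address is taken from the error text, and this is a diagnostic illustration, not part of the test suite:

```go
// Minimal probe for the "connection to the server localhost:8443 was
// refused" symptom seen in each failed `kubectl describe nodes` above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Expected while the apiserver is down, e.g. "connection refused".
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("localhost:8443 is accepting connections")
}
```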
	I1216 06:05:08.289061   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:08.309769   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:08.348480   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:08.352481   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:08.383480   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:08.387475   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:08.419475   11368 logs.go:282] 0 containers: []
	W1216 06:05:08.419475   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:08.422479   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:08.459524   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:08.463102   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:08.499566   11368 logs.go:282] 0 containers: []
	W1216 06:05:08.499566   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:08.502570   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:08.533183   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:08.537185   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:08.570465   11368 logs.go:282] 0 containers: []
	W1216 06:05:08.570465   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:08.574680   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:08.611454   11368 logs.go:282] 0 containers: []
	W1216 06:05:08.611454   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:08.611454   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:08.611454   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:08.651390   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:08.651390   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:08.741417   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:08.741417   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:08.741417   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:08.789418   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:08.789418   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:08.831427   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:08.831427   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:08.873799   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:08.873799   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:08.903799   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:08.903799   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:08.964791   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:08.964791   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:09.011898   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:09.011898   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:11.588894   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:11.615884   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:11.660887   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:11.664899   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:11.703881   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:11.707888   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:11.738879   11368 logs.go:282] 0 containers: []
	W1216 06:05:11.738879   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:11.741877   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:11.774874   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:11.777873   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:11.810880   11368 logs.go:282] 0 containers: []
	W1216 06:05:11.811879   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:11.814874   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:11.850171   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:11.853166   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:11.884431   11368 logs.go:282] 0 containers: []
	W1216 06:05:11.884431   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:11.888434   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:11.917826   11368 logs.go:282] 0 containers: []
	W1216 06:05:11.917826   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:11.917826   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:11.917826   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:12.006978   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:12.006978   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:12.006978   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:12.055519   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:12.055519   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:12.095522   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:12.096524   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:12.127208   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:12.127208   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:12.165149   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:12.165149   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:12.202156   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:12.203151   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:12.244596   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:12.244596   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:12.297530   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:12.297530   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:14.865546   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:14.889506   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:14.920620   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:14.924639   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:14.958725   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:14.964570   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:14.997631   11368 logs.go:282] 0 containers: []
	W1216 06:05:14.997631   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:15.000884   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:15.039862   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:15.045305   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:15.073290   11368 logs.go:282] 0 containers: []
	W1216 06:05:15.073290   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:15.077436   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:15.112804   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:15.117087   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:15.149462   11368 logs.go:282] 0 containers: []
	W1216 06:05:15.149462   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:15.154386   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:15.188947   11368 logs.go:282] 0 containers: []
	W1216 06:05:15.188998   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:15.189047   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:15.189047   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:15.263011   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:15.263011   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:15.319495   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:15.319495   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:15.372313   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:15.372313   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:15.410433   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:15.410433   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:15.451513   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:15.451513   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:15.538982   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:15.539038   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:15.539038   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:15.589745   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:15.589745   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:15.633535   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:15.633587   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:18.194627   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:18.219252   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:18.255861   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:18.259858   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:18.288854   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:18.292853   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:18.326028   11368 logs.go:282] 0 containers: []
	W1216 06:05:18.326028   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:18.330036   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:18.365888   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:18.369890   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:18.404895   11368 logs.go:282] 0 containers: []
	W1216 06:05:18.404895   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:18.408886   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:18.438883   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:18.442892   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:18.475914   11368 logs.go:282] 0 containers: []
	W1216 06:05:18.475914   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:18.478932   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:18.509908   11368 logs.go:282] 0 containers: []
	W1216 06:05:18.509908   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:18.509908   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:18.509908   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:18.544681   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:18.544681   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:18.626312   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:18.626312   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:18.626312   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:18.672926   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:18.672926   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:18.715960   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:18.715960   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:18.759908   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:18.759908   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:18.798535   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:18.798535   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:18.829618   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:18.829618   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:18.875130   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:18.875216   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:21.441821   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:21.469321   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:21.506905   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:21.512058   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:21.542204   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:21.545199   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:21.575204   11368 logs.go:282] 0 containers: []
	W1216 06:05:21.575204   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:21.579204   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:21.608210   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:21.611198   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:21.646321   11368 logs.go:282] 0 containers: []
	W1216 06:05:21.646321   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:21.650251   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:21.691282   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:21.695103   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:21.729308   11368 logs.go:282] 0 containers: []
	W1216 06:05:21.729360   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:21.734199   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:21.771006   11368 logs.go:282] 0 containers: []
	W1216 06:05:21.771006   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:21.771006   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:21.771006   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:21.821176   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:21.821176   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:21.850192   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:21.850192   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:21.907330   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:21.907330   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:21.980379   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:21.980379   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:22.020237   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:22.020275   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:22.110039   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:22.110091   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:22.110091   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:22.156029   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:22.156029   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:22.227123   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:22.227123   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:24.775104   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:24.795568   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:24.832358   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:24.836986   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:24.878053   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:24.881887   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:24.910158   11368 logs.go:282] 0 containers: []
	W1216 06:05:24.910158   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:24.913790   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:24.949463   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:24.954648   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:24.996809   11368 logs.go:282] 0 containers: []
	W1216 06:05:24.996809   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:25.001009   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:25.042149   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:25.046108   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:25.080873   11368 logs.go:282] 0 containers: []
	W1216 06:05:25.080921   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:25.084890   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:25.121763   11368 logs.go:282] 0 containers: []
	W1216 06:05:25.121763   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:25.121763   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:25.121763   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:25.161660   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:25.161660   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:25.205853   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:25.205853   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:25.256213   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:25.256213   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:25.299961   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:25.300041   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:25.360664   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:25.360719   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:25.429650   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:25.429650   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:25.526210   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:25.526210   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:25.526210   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:25.577463   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:25.578489   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:28.120642   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:28.142765   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:28.173642   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:28.179730   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:28.217869   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:28.221837   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:28.250965   11368 logs.go:282] 0 containers: []
	W1216 06:05:28.250965   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:28.255863   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:28.284861   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:28.289169   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:28.321441   11368 logs.go:282] 0 containers: []
	W1216 06:05:28.321441   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:28.329554   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:28.359443   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:28.362441   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:28.392218   11368 logs.go:282] 0 containers: []
	W1216 06:05:28.392218   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:28.399436   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:28.433036   11368 logs.go:282] 0 containers: []
	W1216 06:05:28.433036   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:28.433036   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:28.433036   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:28.475723   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:28.475723   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:28.516325   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:28.516325   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:28.596323   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:28.596323   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:28.596323   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:28.650126   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:28.650126   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:28.694115   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:28.694115   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:28.725257   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:28.725257   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:28.776847   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:28.776847   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:28.841211   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:28.841211   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:31.388961   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:31.411722   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:31.444741   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:31.449560   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:31.479729   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:31.483769   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:31.515072   11368 logs.go:282] 0 containers: []
	W1216 06:05:31.515072   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:31.519115   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:31.549439   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:31.553207   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:31.584140   11368 logs.go:282] 0 containers: []
	W1216 06:05:31.584140   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:31.591184   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:31.620148   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:31.624858   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:31.656222   11368 logs.go:282] 0 containers: []
	W1216 06:05:31.656222   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:31.661373   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:31.691811   11368 logs.go:282] 0 containers: []
	W1216 06:05:31.691863   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:31.691863   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:31.691919   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:31.747691   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:31.747691   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:31.789911   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:31.789911   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:31.875489   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:31.875489   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:31.875489   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:31.916490   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:31.916490   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:31.956674   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:31.956674   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:31.987674   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:31.987674   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:32.044459   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:32.044459   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:32.109226   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:32.109226   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:34.656225   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:34.682222   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:34.726212   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:34.729222   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:34.770212   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:34.774229   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:34.809207   11368 logs.go:282] 0 containers: []
	W1216 06:05:34.809207   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:34.813237   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:34.849218   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:34.853217   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:34.888216   11368 logs.go:282] 0 containers: []
	W1216 06:05:34.888216   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:34.892214   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:34.929212   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:34.933241   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:34.974225   11368 logs.go:282] 0 containers: []
	W1216 06:05:34.974225   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:34.978214   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:35.025216   11368 logs.go:282] 0 containers: []
	W1216 06:05:35.025216   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:35.025216   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:35.025216   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:35.058218   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:35.058218   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:35.122227   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:35.122227   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:35.194219   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:35.194219   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:35.241220   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:35.241220   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:35.345231   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:35.345231   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:35.345231   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:35.393217   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:35.393217   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:35.450569   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:35.450569   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:35.503092   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:35.503621   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:38.055302   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:38.076887   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:38.115584   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:38.119550   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:38.152154   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:38.155881   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:38.185711   11368 logs.go:282] 0 containers: []
	W1216 06:05:38.185771   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:38.189620   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:38.224202   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:38.229521   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:38.266282   11368 logs.go:282] 0 containers: []
	W1216 06:05:38.266282   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:38.270572   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:38.310336   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:38.313346   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:38.341340   11368 logs.go:282] 0 containers: []
	W1216 06:05:38.341340   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:38.344332   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:38.370185   11368 logs.go:282] 0 containers: []
	W1216 06:05:38.370229   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:38.370268   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:38.370317   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:38.413832   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:38.413832   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:38.463259   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:38.463259   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:38.512097   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:38.512097   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:38.559998   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:38.559998   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:38.615706   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:38.615747   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:38.668631   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:38.668631   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:38.786027   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:38.786027   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:38.786027   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:38.826970   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:38.827502   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:41.407058   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:41.432735   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:41.473702   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:41.481551   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:41.515118   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:41.518118   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:41.550110   11368 logs.go:282] 0 containers: []
	W1216 06:05:41.550110   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:41.554103   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:41.589343   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:41.593240   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:41.637093   11368 logs.go:282] 0 containers: []
	W1216 06:05:41.637093   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:41.643890   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:41.683494   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:41.687491   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:41.720505   11368 logs.go:282] 0 containers: []
	W1216 06:05:41.720505   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:41.725496   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:41.769250   11368 logs.go:282] 0 containers: []
	W1216 06:05:41.769250   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:41.769250   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:41.769250   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:41.803244   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:41.803244   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:41.866239   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:41.866239   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:41.901243   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:41.901243   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:41.960842   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:41.960842   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:42.001596   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:42.001596   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:42.042606   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:42.042606   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:42.106280   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:42.106322   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:42.202846   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:42.203388   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:42.203388   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:44.760019   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:44.781486   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:44.820675   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:44.824283   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:44.853548   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:44.857026   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:44.884860   11368 logs.go:282] 0 containers: []
	W1216 06:05:44.884860   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:44.889008   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:44.916630   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:44.920162   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:44.949819   11368 logs.go:282] 0 containers: []
	W1216 06:05:44.949819   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:44.953252   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:44.985266   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:44.988765   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:45.018634   11368 logs.go:282] 0 containers: []
	W1216 06:05:45.018634   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:45.022343   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:45.052253   11368 logs.go:282] 0 containers: []
	W1216 06:05:45.052253   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:45.052253   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:45.052253   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:45.087610   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:45.087610   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:45.169014   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:45.169014   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:45.169014   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:45.210449   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:45.210449   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:45.242921   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:45.242921   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:45.307260   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:45.307260   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:45.358000   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:45.358000   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:45.406251   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:45.406293   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:45.443715   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:45.444689   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:48.000697   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:48.027903   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:48.065363   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:48.068909   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:48.107274   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:48.111545   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:48.150600   11368 logs.go:282] 0 containers: []
	W1216 06:05:48.150600   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:48.155091   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:48.200788   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:48.204641   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:48.236747   11368 logs.go:282] 0 containers: []
	W1216 06:05:48.236747   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:48.239743   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:48.319232   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:48.323174   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:48.360452   11368 logs.go:282] 0 containers: []
	W1216 06:05:48.360452   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:48.364647   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:48.398627   11368 logs.go:282] 0 containers: []
	W1216 06:05:48.398627   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:48.398679   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:48.398731   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:48.464497   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:48.464497   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:48.568063   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:48.568063   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:48.637134   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:48.637134   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:48.686401   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:48.686401   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:48.763310   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:48.763310   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:48.800827   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:48.800827   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:48.844908   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:48.844908   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:48.884341   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:48.884403   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:48.980036   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:51.485193   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:51.510207   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:51.543255   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:51.546986   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:51.583931   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:51.589236   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:51.630772   11368 logs.go:282] 0 containers: []
	W1216 06:05:51.630772   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:51.633765   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:51.664775   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:51.667772   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:51.698005   11368 logs.go:282] 0 containers: []
	W1216 06:05:51.698005   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:51.701712   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:51.730683   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:51.734378   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:51.765212   11368 logs.go:282] 0 containers: []
	W1216 06:05:51.765212   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:51.769752   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:51.807920   11368 logs.go:282] 0 containers: []
	W1216 06:05:51.807920   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:51.807920   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:51.807920   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:51.868907   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:51.868907   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:51.912179   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:51.912179   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:52.002339   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:52.002339   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:52.002339   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:52.048025   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:52.048025   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:52.086852   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:52.086912   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:52.129049   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:52.129049   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:52.172686   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:52.172743   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:52.213307   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:52.213358   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:54.777523   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:54.799529   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:54.831725   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:54.834996   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:54.867058   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:54.870824   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:54.898584   11368 logs.go:282] 0 containers: []
	W1216 06:05:54.898584   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:54.902334   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:54.929632   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:54.933614   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:54.965939   11368 logs.go:282] 0 containers: []
	W1216 06:05:54.965939   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:54.970150   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:55.007002   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:55.010330   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:55.042287   11368 logs.go:282] 0 containers: []
	W1216 06:05:55.042287   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:55.046101   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:55.075358   11368 logs.go:282] 0 containers: []
	W1216 06:05:55.075358   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:55.075358   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:55.075358   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:55.112061   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:55.112061   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:55.192162   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:55.192162   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:55.192162   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:05:55.251159   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:55.251159   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:55.289116   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:55.289116   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:55.318382   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:55.318382   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:55.378494   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:55.378494   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:55.420996   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:55.420996   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:55.460675   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:55.460675   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:58.014535   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:05:58.035691   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:05:58.066401   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:05:58.070001   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:05:58.102384   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:05:58.105843   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:05:58.133113   11368 logs.go:282] 0 containers: []
	W1216 06:05:58.133113   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:05:58.136596   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:05:58.176991   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:05:58.181372   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:05:58.220809   11368 logs.go:282] 0 containers: []
	W1216 06:05:58.220809   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:05:58.227824   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:05:58.256989   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:05:58.261185   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:05:58.291150   11368 logs.go:282] 0 containers: []
	W1216 06:05:58.291150   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:05:58.294508   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:05:58.324981   11368 logs.go:282] 0 containers: []
	W1216 06:05:58.324981   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:05:58.324981   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:05:58.324981   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:05:58.359857   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:05:58.359857   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:05:58.413567   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:05:58.413567   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:05:58.450000   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:05:58.450000   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:05:58.497065   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:05:58.497065   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:05:58.532730   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:05:58.533248   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:05:58.563283   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:05:58.563283   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:05:58.627137   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:05:58.627137   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:05:58.708129   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:05:58.708207   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:05:58.708235   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:01.260108   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:01.283083   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:01.314152   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:01.317832   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:01.365202   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:01.368664   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:01.398957   11368 logs.go:282] 0 containers: []
	W1216 06:06:01.398957   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:01.402394   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:01.434983   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:01.437521   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:01.472139   11368 logs.go:282] 0 containers: []
	W1216 06:06:01.472139   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:01.475353   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:01.508727   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:01.511672   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:01.538866   11368 logs.go:282] 0 containers: []
	W1216 06:06:01.538866   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:01.542639   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:01.573035   11368 logs.go:282] 0 containers: []
	W1216 06:06:01.573035   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:01.573035   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:01.573035   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:01.621796   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:01.621796   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:01.660104   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:01.660104   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:01.704784   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:01.704784   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:01.790527   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:01.790527   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:01.848754   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:01.849275   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:01.909098   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:01.909098   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:01.947227   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:01.947227   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:02.025493   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:02.025567   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:02.025567   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:04.582726   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:04.604322   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:04.638334   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:04.641743   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:04.669563   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:04.673867   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:04.701996   11368 logs.go:282] 0 containers: []
	W1216 06:06:04.702059   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:04.705778   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:04.735078   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:04.738624   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:04.770155   11368 logs.go:282] 0 containers: []
	W1216 06:06:04.770155   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:04.773096   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:04.806236   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:04.809577   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:04.837095   11368 logs.go:282] 0 containers: []
	W1216 06:06:04.837095   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:04.841111   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:04.868803   11368 logs.go:282] 0 containers: []
	W1216 06:06:04.868803   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:04.868891   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:04.868891   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:04.908476   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:04.908476   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:04.941138   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:04.941138   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:05.005994   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:05.005994   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:05.090835   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:05.090835   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:05.090835   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:05.141665   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:05.141665   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:05.184187   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:05.184253   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:05.233437   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:05.233437   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:05.269493   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:05.269493   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:07.817226   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:07.841046   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:07.877845   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:07.881671   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:07.917549   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:07.923454   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:07.964516   11368 logs.go:282] 0 containers: []
	W1216 06:06:07.964516   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:07.967516   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:08.017564   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:08.021708   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:08.049904   11368 logs.go:282] 0 containers: []
	W1216 06:06:08.049904   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:08.053906   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:08.088188   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:08.092166   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:08.120164   11368 logs.go:282] 0 containers: []
	W1216 06:06:08.120164   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:08.123164   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:08.152402   11368 logs.go:282] 0 containers: []
	W1216 06:06:08.152402   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:08.152402   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:08.152402   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:08.205483   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:08.205483   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:08.247976   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:08.247976   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:08.297281   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:08.297319   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:08.334548   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:08.334548   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:08.393289   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:08.393289   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:08.457295   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:08.458312   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:08.494283   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:08.494283   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:08.543287   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:08.543287   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:08.646545   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:11.152671   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:11.174637   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:11.209959   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:11.212640   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:11.243614   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:11.247622   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:11.275615   11368 logs.go:282] 0 containers: []
	W1216 06:06:11.275615   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:11.278633   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:11.310623   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:11.314615   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:11.338639   11368 logs.go:282] 0 containers: []
	W1216 06:06:11.338639   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:11.342859   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:11.375324   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:11.378755   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:11.411568   11368 logs.go:282] 0 containers: []
	W1216 06:06:11.411568   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:11.414557   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:11.441559   11368 logs.go:282] 0 containers: []
	W1216 06:06:11.441559   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:11.441559   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:11.441559   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:11.502564   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:11.502564   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:11.546565   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:11.546565   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:11.600749   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:11.600749   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:11.638686   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:11.638727   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:11.678380   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:11.678380   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:11.738284   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:11.738284   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:11.821595   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:11.821595   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:11.821595   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:11.870197   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:11.870197   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:14.415063   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:14.439734   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:14.472308   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:14.478142   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:14.513380   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:14.517353   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:14.557353   11368 logs.go:282] 0 containers: []
	W1216 06:06:14.557353   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:14.560358   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:14.591950   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:14.597089   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:14.626068   11368 logs.go:282] 0 containers: []
	W1216 06:06:14.626068   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:14.629014   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:14.660363   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:14.663377   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:14.694371   11368 logs.go:282] 0 containers: []
	W1216 06:06:14.694371   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:14.699373   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:14.733372   11368 logs.go:282] 0 containers: []
	W1216 06:06:14.733372   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:14.733372   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:14.733372   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:14.797373   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:14.797373   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:14.835382   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:14.835382   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:14.884221   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:14.884221   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:14.921084   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:14.921084   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:15.009273   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:15.009273   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:15.009273   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:15.050274   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:15.050274   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:15.100286   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:15.100286   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:15.137880   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:15.137880   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:17.690215   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:17.714901   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:17.744481   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:17.747479   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:17.777938   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:17.780927   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:17.809521   11368 logs.go:282] 0 containers: []
	W1216 06:06:17.809521   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:17.813738   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:17.849010   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:17.852009   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:17.881094   11368 logs.go:282] 0 containers: []
	W1216 06:06:17.881094   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:17.884754   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:17.916410   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:17.919402   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:17.948512   11368 logs.go:282] 0 containers: []
	W1216 06:06:17.948512   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:17.952370   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:17.984139   11368 logs.go:282] 0 containers: []
	W1216 06:06:17.984139   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:17.984139   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:17.984139   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:18.029650   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:18.029650   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:18.077378   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:18.077378   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:18.115332   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:18.115332   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:18.167331   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:18.167406   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:18.218363   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:18.218363   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:18.261369   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:18.261369   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:18.290360   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:18.290360   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:18.356946   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:18.356946   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:18.439165   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:20.943514   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:20.966513   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:20.998069   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:21.001093   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:21.032084   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:21.035078   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:21.065079   11368 logs.go:282] 0 containers: []
	W1216 06:06:21.065079   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:21.068078   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:21.102071   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:21.107075   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:21.136366   11368 logs.go:282] 0 containers: []
	W1216 06:06:21.136366   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:21.141404   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:21.178622   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:21.182560   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:21.217297   11368 logs.go:282] 0 containers: []
	W1216 06:06:21.217297   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:21.220299   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:21.253298   11368 logs.go:282] 0 containers: []
	W1216 06:06:21.253298   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:21.253298   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:21.253298   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:21.336305   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:21.336305   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:21.336305   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:21.384305   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:21.384305   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:21.428954   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:21.428954   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:21.464956   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:21.465950   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:21.541513   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:21.541513   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:21.591730   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:21.591730   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:21.657308   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:21.657308   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:21.710076   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:21.710123   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:24.263538   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:24.289172   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:24.326700   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:24.329694   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:24.360723   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:24.363697   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:24.397706   11368 logs.go:282] 0 containers: []
	W1216 06:06:24.397706   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:24.400708   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:24.433714   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:24.437716   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:24.471289   11368 logs.go:282] 0 containers: []
	W1216 06:06:24.471289   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:24.475285   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:24.506286   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:24.510294   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:24.537294   11368 logs.go:282] 0 containers: []
	W1216 06:06:24.537294   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:24.541285   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:24.570123   11368 logs.go:282] 0 containers: []
	W1216 06:06:24.570123   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:24.570123   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:24.570123   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:24.688433   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:24.688433   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:24.688433   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:24.746847   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:24.746847   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:24.784847   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:24.784847   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:24.812846   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:24.812846   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:24.875677   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:24.875677   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:24.920667   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:24.920667   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:24.968674   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:24.968674   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:25.035675   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:25.035675   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
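The block above is one iteration of minikube's apiserver health-check loop: roughly every three seconds it probes for a kube-apiserver process (pgrep), enumerates the expected control-plane containers via docker ps name filters, and gathers logs for whatever it finds. The loop never converges because "kubectl describe nodes" keeps failing with connection refused on localhost:8443. The same probe can be run by hand; a minimal sketch, assuming shell access to this node (e.g. via minikube ssh) -- the kubectl path, pgrep pattern, and container filter are copied verbatim from the log:

    # Step 1: is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process up" || echo "no apiserver process"

    # Step 2: does an apiserver container exist (running or exited)?
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}} {{.Status}}'

    # Step 3: the call that fails throughout this log -- the apiserver is not
    # answering on localhost:8443 even though container 8466efe0438e exists.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig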
	I1216 06:06:27.574825   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:27.596868   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:27.630036   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:27.633989   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:27.677358   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:27.683643   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:27.711261   11368 logs.go:282] 0 containers: []
	W1216 06:06:27.711261   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:27.716427   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:27.754082   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:27.757757   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:27.784424   11368 logs.go:282] 0 containers: []
	W1216 06:06:27.784424   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:27.789168   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:27.834550   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:27.837541   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:27.867105   11368 logs.go:282] 0 containers: []
	W1216 06:06:27.867105   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:27.870109   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:27.901111   11368 logs.go:282] 0 containers: []
	W1216 06:06:27.901111   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:27.901111   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:27.901111   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:27.968565   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:27.968565   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:28.064158   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:28.064158   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:28.199872   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:28.199934   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:28.199934   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:28.263627   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:28.263627   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:28.309826   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:28.309826   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:28.356472   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:28.356523   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:28.391188   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:28.391188   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:28.439825   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:28.439825   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:30.991142   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:31.012359   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:31.045463   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:31.048785   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:31.080357   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:31.084660   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:31.112556   11368 logs.go:282] 0 containers: []
	W1216 06:06:31.112556   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:31.116337   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:31.152246   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:31.157260   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:31.188624   11368 logs.go:282] 0 containers: []
	W1216 06:06:31.188704   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:31.192236   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:31.232065   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:31.235065   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:31.263065   11368 logs.go:282] 0 containers: []
	W1216 06:06:31.263065   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:31.266054   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:31.296628   11368 logs.go:282] 0 containers: []
	W1216 06:06:31.296628   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:31.296628   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:31.296628   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:31.341000   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:31.341000   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:31.373125   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:31.373125   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:31.421166   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:31.421255   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:31.463109   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:31.463109   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:31.555618   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:31.555618   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:31.555618   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:31.602529   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:31.602529   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:31.642522   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:31.642522   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:31.683646   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:31.683646   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:34.253499   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:34.275401   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:34.305308   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:34.310771   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:34.339825   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:34.344204   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:34.374160   11368 logs.go:282] 0 containers: []
	W1216 06:06:34.374178   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:34.377897   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:34.413245   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:34.416647   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:34.445808   11368 logs.go:282] 0 containers: []
	W1216 06:06:34.445808   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:34.449582   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:34.479427   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:34.482704   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:34.510934   11368 logs.go:282] 0 containers: []
	W1216 06:06:34.511012   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:34.514541   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:34.542760   11368 logs.go:282] 0 containers: []
	W1216 06:06:34.542842   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:34.542842   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:34.542842   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:34.582260   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:34.582296   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:34.616302   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:34.616302   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:34.664567   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:34.664567   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:34.693699   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:34.693699   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:34.745336   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:34.745336   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:34.811542   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:34.811542   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:34.897243   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:34.897243   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:34.897243   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:34.948969   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:34.948969   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:37.497841   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:37.520694   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:37.554249   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:37.558133   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:37.587012   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:37.590406   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:37.621423   11368 logs.go:282] 0 containers: []
	W1216 06:06:37.621423   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:37.625471   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:37.656595   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:37.660598   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:37.689731   11368 logs.go:282] 0 containers: []
	W1216 06:06:37.689763   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:37.694035   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:37.725019   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:37.728348   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:37.774273   11368 logs.go:282] 0 containers: []
	W1216 06:06:37.774273   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:37.777706   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:37.810833   11368 logs.go:282] 0 containers: []
	W1216 06:06:37.810833   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:37.810833   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:37.810919   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:37.854562   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:37.855082   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:37.890231   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:37.890231   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:37.945180   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:37.945180   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:37.993848   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:37.993848   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:38.046377   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:38.046377   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:38.075295   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:38.075295   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:38.127764   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:38.127836   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:38.192879   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:38.192879   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:38.275214   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:40.779838   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:40.803391   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:40.843032   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:40.845978   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:40.877022   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:40.880703   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:40.912814   11368 logs.go:282] 0 containers: []
	W1216 06:06:40.912814   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:40.916604   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:40.951467   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:40.954901   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:40.985231   11368 logs.go:282] 0 containers: []
	W1216 06:06:40.985231   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:40.990041   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:41.019463   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:41.022464   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:41.054364   11368 logs.go:282] 0 containers: []
	W1216 06:06:41.054469   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:41.057974   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:41.085800   11368 logs.go:282] 0 containers: []
	W1216 06:06:41.085877   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:41.085877   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:41.085877   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:41.164124   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:41.164124   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:41.164124   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:41.203334   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:41.203334   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:41.244627   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:41.244627   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:41.284905   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:41.284965   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:41.315311   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:41.315311   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:41.361202   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:41.361202   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:41.403837   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:41.403837   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:41.467253   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:41.467253   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:44.010504   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:44.084834   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:44.118979   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:44.122720   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:44.165302   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:44.168307   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:44.199319   11368 logs.go:282] 0 containers: []
	W1216 06:06:44.199319   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:44.203283   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:44.234299   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:44.238299   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:44.270291   11368 logs.go:282] 0 containers: []
	W1216 06:06:44.270291   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:44.274286   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:44.302286   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:44.305285   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:44.340288   11368 logs.go:282] 0 containers: []
	W1216 06:06:44.340288   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:44.343291   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:44.394820   11368 logs.go:282] 0 containers: []
	W1216 06:06:44.395837   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:44.395837   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:44.395837   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:44.435830   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:44.435830   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:44.518984   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:44.518984   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:44.518984   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:44.566961   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:44.566961   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:44.619969   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:44.619969   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:44.695965   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:44.695965   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:44.748972   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:44.748972   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:44.785632   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:44.785632   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:44.821336   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:44.821336   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:47.382673   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:47.410664   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:47.448282   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:47.452292   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:47.486540   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:47.489757   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:47.518885   11368 logs.go:282] 0 containers: []
	W1216 06:06:47.518885   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:47.522956   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:47.555047   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:47.559015   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:47.596125   11368 logs.go:282] 0 containers: []
	W1216 06:06:47.596125   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:47.600195   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:47.639228   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:47.644118   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:47.690972   11368 logs.go:282] 0 containers: []
	W1216 06:06:47.690972   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:47.695970   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:47.729972   11368 logs.go:282] 0 containers: []
	W1216 06:06:47.729972   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:47.729972   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:47.729972   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:47.787179   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:47.787179   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:47.834172   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:47.834221   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:47.883520   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:47.883520   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:47.923681   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:47.923681   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:47.975193   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:47.975193   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:48.010627   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:48.010627   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:48.070638   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:48.070638   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:48.147620   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:48.148623   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:48.253267   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
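Every iteration finds the same four containers -- kube-apiserver 8466efe0438e, etcd b5eb8faf39e0, kube-scheduler cf492f615c62, kube-controller-manager 06ae27c05587 -- while coredns, kube-proxy, kindnet, and storage-provisioner never appear. That is consistent with a control plane that never became reachable: coredns, kube-proxy, and storage-provisioner are created through the API server, so they cannot exist while localhost:8443 refuses connections (kindnet is only deployed with the kindnet CNI, so its absence may be expected here regardless). The eight per-component docker ps queries can be collapsed into one pass; a sketch using the exact filter names from the log:

    # Hypothetical one-pass version of the per-component queries above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      printf '%-24s %s\n' "$c" \
        "$(docker ps -a --filter=name=k8s_$c --format='{{.ID}}')"
    done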
	I1216 06:06:50.756426   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:50.776434   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:50.807423   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:50.810429   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:50.847437   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:50.851465   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:50.887441   11368 logs.go:282] 0 containers: []
	W1216 06:06:50.887441   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:50.891437   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:50.926459   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:50.929441   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:50.966443   11368 logs.go:282] 0 containers: []
	W1216 06:06:50.966443   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:50.969448   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:51.000129   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:51.004065   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:51.035304   11368 logs.go:282] 0 containers: []
	W1216 06:06:51.035361   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:51.040647   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:51.072026   11368 logs.go:282] 0 containers: []
	W1216 06:06:51.072026   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:51.072026   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:51.072026   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:51.121039   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:51.121039   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:51.163023   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:51.163023   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:51.266744   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:51.266744   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:51.266744   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:51.312412   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:51.312412   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:51.345446   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:51.345446   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:51.400812   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:51.400812   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:51.481809   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:51.481809   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:51.515807   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:51.515807   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:54.079514   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:54.101932   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:54.133608   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:54.136600   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:54.164017   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:54.167656   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:54.198834   11368 logs.go:282] 0 containers: []
	W1216 06:06:54.198834   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:54.208028   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:54.242618   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:54.246459   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:54.277471   11368 logs.go:282] 0 containers: []
	W1216 06:06:54.277471   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:54.281742   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:54.311737   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:54.316937   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:54.345713   11368 logs.go:282] 0 containers: []
	W1216 06:06:54.345745   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:54.349341   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:54.377542   11368 logs.go:282] 0 containers: []
	W1216 06:06:54.377596   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:54.377639   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:54.377639   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:54.457490   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:54.457490   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:54.493769   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:54.493769   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:54.534784   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:54.534784   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:54.568498   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:54.568498   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:54.611477   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:54.611477   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:54.694576   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:06:54.694612   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:54.694612   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:54.742503   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:54.742503   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:54.781504   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:54.781504   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:57.326676   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:06:57.349097   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:06:57.386926   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:06:57.390245   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:06:57.424958   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:06:57.429966   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:06:57.461447   11368 logs.go:282] 0 containers: []
	W1216 06:06:57.461503   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:06:57.466884   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:06:57.523930   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:06:57.528021   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:06:57.557465   11368 logs.go:282] 0 containers: []
	W1216 06:06:57.557517   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:06:57.561091   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:06:57.587117   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:06:57.591819   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:06:57.622462   11368 logs.go:282] 0 containers: []
	W1216 06:06:57.622462   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:06:57.625817   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:06:57.655584   11368 logs.go:282] 0 containers: []
	W1216 06:06:57.655584   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:06:57.655584   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:06:57.655584   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:06:57.708474   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:06:57.708474   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:06:57.755736   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:06:57.755736   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:06:57.791135   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:06:57.791135   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:06:57.842614   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:06:57.842614   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:06:57.887875   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:06:57.887875   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:06:57.920230   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:06:57.920230   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:06:57.985953   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:06:57.985953   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:06:58.024330   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:06:58.024373   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:06:58.111136   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
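One detail of the recurring "container status" command: the backtick substitution `which crictl || echo crictl` resolves crictl's full path when the binary is on PATH, and otherwise yields the literal name so the first `sudo ... ps -a` fails cleanly; either way, any failure (missing binary or no reachable CRI endpoint) triggers the `|| sudo docker ps -a` fallback. An equivalent, more explicit sketch of the same fallback chain:

    # Prefer crictl; fall back to docker if crictl is absent or errors out.
    crictl_bin="$(command -v crictl || echo crictl)"
    sudo "$crictl_bin" ps -a || sudo docker ps -a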
	I1216 06:07:00.614928   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:00.634930   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:00.665214   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:00.669140   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:00.701014   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:00.704794   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:00.734253   11368 logs.go:282] 0 containers: []
	W1216 06:07:00.734253   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:00.738069   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:00.771137   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:00.774589   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:00.803502   11368 logs.go:282] 0 containers: []
	W1216 06:07:00.803502   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:00.808406   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:00.837086   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:00.841007   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:00.876062   11368 logs.go:282] 0 containers: []
	W1216 06:07:00.876062   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:00.882539   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:00.913309   11368 logs.go:282] 0 containers: []
	W1216 06:07:00.913309   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:00.913309   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:00.913309   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:00.977928   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:00.977928   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:01.057939   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:01.057939   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:01.105254   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:01.105254   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:01.160353   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:01.160353   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:01.218766   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:01.218766   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:01.298618   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:01.298687   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:01.298687   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:01.345227   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:01.345258   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:01.384913   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:01.384913   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:03.919874   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:03.945550   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:03.982946   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:03.985946   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:04.018777   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:04.022972   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:04.052470   11368 logs.go:282] 0 containers: []
	W1216 06:07:04.052538   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:04.056773   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:04.088939   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:04.093088   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:04.123524   11368 logs.go:282] 0 containers: []
	W1216 06:07:04.123524   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:04.127842   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:04.161678   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:04.166670   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:04.198662   11368 logs.go:282] 0 containers: []
	W1216 06:07:04.198662   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:04.201841   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:04.232446   11368 logs.go:282] 0 containers: []
	W1216 06:07:04.232446   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:04.232446   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:04.232446   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:04.292975   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:04.292975   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:04.380874   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:04.380926   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:04.380926   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:04.433420   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:04.433420   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:04.468127   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:04.468127   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:04.527709   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:04.527709   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:04.563703   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:04.563703   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:04.610837   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:04.610837   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:04.657401   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:04.657401   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:07.199668   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:07.223303   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:07.258400   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:07.264590   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:07.296326   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:07.300414   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:07.328664   11368 logs.go:282] 0 containers: []
	W1216 06:07:07.328664   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:07.337484   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:07.370984   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:07.374495   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:07.402001   11368 logs.go:282] 0 containers: []
	W1216 06:07:07.402001   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:07.406102   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:07.438864   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:07.442867   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:07.468564   11368 logs.go:282] 0 containers: []
	W1216 06:07:07.468564   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:07.472365   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:07.500219   11368 logs.go:282] 0 containers: []
	W1216 06:07:07.500219   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:07.500219   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:07.500219   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:07.584395   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:07.584395   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:07.584395   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:07.629495   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:07.629547   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:07.674353   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:07.674353   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:07.712483   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:07.712483   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:07.745718   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:07.745718   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:07.794350   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:07.794504   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:07.862515   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:07.862515   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:07.903338   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:07.903338   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
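Each retry iteration opens with the same container inventory: one docker ps query per control-plane component, where empty output produces the "No container was found" warning seen above. A condensed sketch of that step, assuming it is run inside the minikube node (e.g. via minikube ssh); the docker ps filters are copied verbatim from the log:

    # One docker ps query per component, as in the log above; an empty
    # ID column for a component means "no container was found".
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      printf '%-24s %s\n' "$c" \
        "$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')"
    done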
	I1216 06:07:10.463987   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:10.487320   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:10.522822   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:10.526401   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:10.567732   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:10.571978   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:10.600807   11368 logs.go:282] 0 containers: []
	W1216 06:07:10.600878   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:10.605181   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:10.643299   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:10.648041   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:10.675696   11368 logs.go:282] 0 containers: []
	W1216 06:07:10.675747   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:10.680327   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:10.714566   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:10.720189   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:10.751702   11368 logs.go:282] 0 containers: []
	W1216 06:07:10.751702   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:10.755306   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:10.782824   11368 logs.go:282] 0 containers: []
	W1216 06:07:10.782896   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:10.782896   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:10.782951   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:10.820593   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:10.820593   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:10.912946   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:10.912946   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:10.912946   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:10.969275   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:10.969275   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:11.012106   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:11.012106   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:11.070331   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:11.070862   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:11.110811   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:11.110811   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:11.173939   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:11.173939   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:11.244725   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:11.244725   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:13.785986   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:13.808051   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:13.841280   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:13.844795   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:13.882387   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:13.885937   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:13.920365   11368 logs.go:282] 0 containers: []
	W1216 06:07:13.920365   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:13.923350   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:13.955343   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:13.959696   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:13.992355   11368 logs.go:282] 0 containers: []
	W1216 06:07:13.992355   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:13.995354   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:14.023355   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:14.027356   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:14.058357   11368 logs.go:282] 0 containers: []
	W1216 06:07:14.058413   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:14.062769   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:14.092438   11368 logs.go:282] 0 containers: []
	W1216 06:07:14.092438   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:14.092438   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:14.092438   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:14.130117   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:14.130117   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:14.217638   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:14.217675   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:14.217708   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:14.266697   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:14.266697   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:14.314896   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:14.314942   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:14.382758   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:14.382758   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:14.425449   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:14.425449   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:14.474837   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:14.474837   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:14.518333   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:14.518333   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:17.056515   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:17.079232   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:17.113407   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:17.119308   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:17.153000   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:17.157094   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:17.184827   11368 logs.go:282] 0 containers: []
	W1216 06:07:17.184827   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:17.189175   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:17.224868   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:17.230718   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:17.263254   11368 logs.go:282] 0 containers: []
	W1216 06:07:17.263254   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:17.268501   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:17.297627   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:17.301617   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:17.335629   11368 logs.go:282] 0 containers: []
	W1216 06:07:17.335629   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:17.339641   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:17.367917   11368 logs.go:282] 0 containers: []
	W1216 06:07:17.367917   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:17.367917   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:17.367917   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:17.411884   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:17.411884   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:17.468584   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:17.468584   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:17.534152   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:17.534152   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:17.572138   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:17.572138   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:17.669152   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:17.669227   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:17.669287   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:17.734769   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:17.734769   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:17.775735   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:17.775735   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:17.806693   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:17.806693   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:20.371248   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:20.392329   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:20.425635   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:20.429049   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:20.462382   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:20.465873   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:20.498474   11368 logs.go:282] 0 containers: []
	W1216 06:07:20.498474   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:20.502287   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:20.533839   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:20.537513   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:20.565181   11368 logs.go:282] 0 containers: []
	W1216 06:07:20.565181   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:20.569004   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:20.598607   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:20.602063   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:20.631797   11368 logs.go:282] 0 containers: []
	W1216 06:07:20.631911   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:20.635730   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:20.668218   11368 logs.go:282] 0 containers: []
	W1216 06:07:20.668218   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:20.668218   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:20.668218   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:20.722758   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:20.722758   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:20.765279   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:20.765279   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:20.813615   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:20.813615   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:20.897709   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:20.897709   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:20.897709   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:20.955567   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:20.955567   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:20.998142   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:20.998142   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:21.028020   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:21.028020   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:21.092746   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:21.092746   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:23.640458   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:23.665750   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:23.702372   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:23.706884   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:23.734769   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:23.738766   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:23.769419   11368 logs.go:282] 0 containers: []
	W1216 06:07:23.769490   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:23.772499   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:23.801495   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:23.805122   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:23.833093   11368 logs.go:282] 0 containers: []
	W1216 06:07:23.833093   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:23.837074   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:23.865001   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:23.868534   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:23.898962   11368 logs.go:282] 0 containers: []
	W1216 06:07:23.898962   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:23.902480   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:23.932833   11368 logs.go:282] 0 containers: []
	W1216 06:07:23.932833   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:23.932883   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:23.932904   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:23.984359   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:23.984359   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:24.027879   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:24.027879   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:24.078551   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:24.078551   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:24.120887   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:24.120957   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:24.152219   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:24.152219   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:24.211944   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:24.211944   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:24.251146   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:24.251146   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:24.326145   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:24.326145   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:24.326145   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:26.878229   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:26.903119   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:26.949555   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:26.953420   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:26.986877   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:26.991041   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:27.017300   11368 logs.go:282] 0 containers: []
	W1216 06:07:27.017342   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:27.021473   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:27.058162   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:27.063316   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:27.095039   11368 logs.go:282] 0 containers: []
	W1216 06:07:27.095039   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:27.098992   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:27.133639   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:27.138479   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:27.181561   11368 logs.go:282] 0 containers: []
	W1216 06:07:27.181561   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:27.185574   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:27.212558   11368 logs.go:282] 0 containers: []
	W1216 06:07:27.212558   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:27.212558   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:27.212558   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:27.255007   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:27.255007   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:27.303106   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:27.303106   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:27.334990   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:27.334990   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:27.439847   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:27.439847   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:27.439847   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:27.527280   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:27.527280   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:27.578468   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:27.578468   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:27.624421   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:27.624421   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:27.672368   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:27.672408   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:30.245711   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:30.269553   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:30.305853   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:30.308852   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:30.352442   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:30.355438   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:30.386933   11368 logs.go:282] 0 containers: []
	W1216 06:07:30.387476   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:30.391441   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:30.423042   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:30.427041   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:30.462055   11368 logs.go:282] 0 containers: []
	W1216 06:07:30.462055   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:30.466040   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:30.509045   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:30.512045   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:30.548280   11368 logs.go:282] 0 containers: []
	W1216 06:07:30.548280   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:30.551274   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:30.579277   11368 logs.go:282] 0 containers: []
	W1216 06:07:30.579277   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:30.579277   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:30.579277   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:30.640275   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:30.640275   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:30.684017   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:30.684017   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:30.718218   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:30.718218   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:30.758107   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:30.758107   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:30.846279   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:30.846279   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:30.846279   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:30.892492   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:30.892492   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:30.938045   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:30.938097   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:30.968453   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:30.968453   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:33.526151   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:33.547993   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:33.581264   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:33.584399   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:33.616420   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:33.620307   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:33.648450   11368 logs.go:282] 0 containers: []
	W1216 06:07:33.648450   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:33.652384   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:33.682763   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:33.686078   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:33.713355   11368 logs.go:282] 0 containers: []
	W1216 06:07:33.713355   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:33.716979   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:33.748515   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:33.751750   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:33.781673   11368 logs.go:282] 0 containers: []
	W1216 06:07:33.781673   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:33.785376   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:33.818771   11368 logs.go:282] 0 containers: []
	W1216 06:07:33.818771   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:33.818858   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:33.818858   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:33.861826   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:33.861826   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:33.907694   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:33.907694   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:33.945933   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:33.945933   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:33.975305   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:33.975305   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:34.030718   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:34.030799   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:34.096587   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:34.096587   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:34.133063   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:34.133063   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:34.261233   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:34.261233   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:34.261233   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:36.813159   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:36.835003   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:36.868078   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:36.871819   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:36.900524   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:36.904211   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:36.931600   11368 logs.go:282] 0 containers: []
	W1216 06:07:36.931657   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:36.935312   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:36.965626   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:36.969572   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:36.998700   11368 logs.go:282] 0 containers: []
	W1216 06:07:36.998700   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:37.002525   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:37.052253   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:37.058351   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:37.089933   11368 logs.go:282] 0 containers: []
	W1216 06:07:37.089933   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:37.094228   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:37.125134   11368 logs.go:282] 0 containers: []
	W1216 06:07:37.125134   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:37.125134   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:37.125134   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:37.182838   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:37.182838   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:37.256173   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:37.256173   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:37.300677   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:37.300677   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:37.340698   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:37.340761   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:37.374900   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:37.374900   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:37.459168   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:37.459168   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:37.459168   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:37.499612   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:37.499612   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:37.542358   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:37.542358   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:40.081189   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:40.109345   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:40.143577   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:40.147288   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:40.180471   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:40.185233   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:40.221474   11368 logs.go:282] 0 containers: []
	W1216 06:07:40.221504   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:40.226401   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:40.257232   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:40.260779   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:40.289057   11368 logs.go:282] 0 containers: []
	W1216 06:07:40.289057   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:40.293421   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:40.330221   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:40.333278   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:40.376085   11368 logs.go:282] 0 containers: []
	W1216 06:07:40.376085   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:40.380273   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:40.415281   11368 logs.go:282] 0 containers: []
	W1216 06:07:40.415281   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:40.415281   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:40.415281   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:40.492996   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:40.492996   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:40.492996   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:40.539848   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:40.539848   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:40.586738   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:40.586738   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:40.710626   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:40.710626   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:40.770609   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:40.770609   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:40.805899   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:40.805899   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:40.863952   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:40.863952   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:40.907961   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:40.907961   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:43.467881   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:43.487053   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:43.517060   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:43.520056   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:43.552053   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:43.555055   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:43.584053   11368 logs.go:282] 0 containers: []
	W1216 06:07:43.584053   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:43.587057   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:43.621058   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:43.624067   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:43.656872   11368 logs.go:282] 0 containers: []
	W1216 06:07:43.656872   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:43.659847   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:43.688849   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:43.692848   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:43.724848   11368 logs.go:282] 0 containers: []
	W1216 06:07:43.724848   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:43.727849   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:43.757851   11368 logs.go:282] 0 containers: []
	W1216 06:07:43.757851   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:43.757851   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:43.757851   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:43.795235   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:43.795235   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:43.889181   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:43.889181   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:43.889181   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:43.933986   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:43.933986   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:43.994767   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:43.994767   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:44.054120   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:44.054120   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:44.100531   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:44.100531   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:44.144399   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:44.144399   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:44.191337   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:44.191337   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:46.724295   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:46.744298   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:46.779201   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:46.782040   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:46.811107   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:46.814084   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:46.842856   11368 logs.go:282] 0 containers: []
	W1216 06:07:46.842856   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:46.847104   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:46.881204   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:46.886375   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:46.926508   11368 logs.go:282] 0 containers: []
	W1216 06:07:46.926508   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:46.929517   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:46.972346   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:46.977337   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:47.027839   11368 logs.go:282] 0 containers: []
	W1216 06:07:47.027839   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:47.032678   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:47.066174   11368 logs.go:282] 0 containers: []
	W1216 06:07:47.066174   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:47.066174   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:47.066174   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:47.094126   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:47.094126   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:47.155358   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:47.155358   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:47.196192   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:47.196192   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:47.283817   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:47.284812   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:47.284812   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:47.331488   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:47.331488   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:47.368488   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:47.368488   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:47.419805   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:47.419863   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:47.469587   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:47.469587   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:50.021944   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:50.043836   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:50.075911   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:50.079637   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:50.105911   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:50.109332   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:50.137369   11368 logs.go:282] 0 containers: []
	W1216 06:07:50.137369   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:50.141460   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:50.170179   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:50.174036   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:50.202601   11368 logs.go:282] 0 containers: []
	W1216 06:07:50.202601   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:50.207078   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:50.239742   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:50.243371   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:50.270111   11368 logs.go:282] 0 containers: []
	W1216 06:07:50.270111   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:50.273104   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:50.301846   11368 logs.go:282] 0 containers: []
	W1216 06:07:50.301846   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:50.301846   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:50.301846   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:50.347097   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:50.347325   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:50.411994   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:50.411994   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:50.447098   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:50.447098   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:50.499347   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:50.499347   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:50.533360   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:50.533423   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:50.611768   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:50.611768   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:50.611768   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:50.659396   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:50.659441   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:50.715292   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:50.715292   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
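
The round that began at 06:07:47 repeats at 06:07:50, and again at 06:07:53 and 06:07:56 below, while minikube waits for the apiserver to come back: list candidate control-plane containers by Docker name filter, then tail the logs of whichever exist. A condensed shell equivalent of a single pass (a sketch, run inside the node):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      for id in $ids; do
        docker logs --tail 400 "$id"     # what each "Gathering logs for ..." runs
      done
      [ -n "$ids" ] || echo "No container was found matching \"$c\""
    done

Only the apiserver, etcd, scheduler and controller-manager containers exist in these passes; coredns, kube-proxy, kindnet and storage-provisioner never came up, consistent with a control plane that never got healthy.
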
	I1216 06:07:53.260838   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:53.283992   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:53.319933   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:53.323716   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:53.363356   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:53.367148   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:53.408361   11368 logs.go:282] 0 containers: []
	W1216 06:07:53.408361   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:53.411910   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:53.445351   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:53.450190   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:53.481887   11368 logs.go:282] 0 containers: []
	W1216 06:07:53.481959   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:53.486426   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:53.520628   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:53.525583   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:53.558807   11368 logs.go:282] 0 containers: []
	W1216 06:07:53.558849   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:53.562612   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:53.591803   11368 logs.go:282] 0 containers: []
	W1216 06:07:53.591803   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:53.591803   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:53.591803   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:53.633761   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:53.634296   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:53.686984   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:53.686984   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:53.740356   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:53.740356   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:53.797088   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:53.797168   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:53.886871   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:53.886871   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:53.976048   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:53.976048   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:53.976048   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:54.023663   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:54.023663   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:54.066861   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:54.066861   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:56.603937   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:56.628665   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:07:56.663235   11368 logs.go:282] 1 containers: [8466efe0438e]
	I1216 06:07:56.667223   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:07:56.701629   11368 logs.go:282] 1 containers: [b5eb8faf39e0]
	I1216 06:07:56.705466   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:07:56.742852   11368 logs.go:282] 0 containers: []
	W1216 06:07:56.742852   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:07:56.750384   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:07:56.783234   11368 logs.go:282] 1 containers: [cf492f615c62]
	I1216 06:07:56.786239   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:07:56.817257   11368 logs.go:282] 0 containers: []
	W1216 06:07:56.817257   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:07:56.822252   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:07:56.852776   11368 logs.go:282] 1 containers: [06ae27c05587]
	I1216 06:07:56.858398   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:07:56.888429   11368 logs.go:282] 0 containers: []
	W1216 06:07:56.888429   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:07:56.892158   11368 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1216 06:07:56.926166   11368 logs.go:282] 0 containers: []
	W1216 06:07:56.926166   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:07:56.926166   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:07:56.926166   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:07:56.958856   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:07:56.958856   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:07:57.020366   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:07:57.020366   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:07:57.087425   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:07:57.087425   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:07:57.134011   11368 logs.go:123] Gathering logs for kube-apiserver [8466efe0438e] ...
	I1216 06:07:57.134011   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8466efe0438e"
	I1216 06:07:57.191003   11368 logs.go:123] Gathering logs for kube-scheduler [cf492f615c62] ...
	I1216 06:07:57.192000   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf492f615c62"
	I1216 06:07:57.236180   11368 logs.go:123] Gathering logs for kube-controller-manager [06ae27c05587] ...
	I1216 06:07:57.236180   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06ae27c05587"
	I1216 06:07:57.278896   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:07:57.278896   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:07:57.362278   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:07:57.362278   11368 logs.go:123] Gathering logs for etcd [b5eb8faf39e0] ...
	I1216 06:07:57.362278   11368 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5eb8faf39e0"
	I1216 06:07:59.916774   11368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:07:59.937220   11368 kubeadm.go:602] duration metric: took 4m3.2860348s to restartPrimaryControlPlane
	W1216 06:07:59.937301   11368 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
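
After 4m3s of these polls with no apiserver, minikube abandons restarting the existing control plane and wipes it instead: kubeadm reset followed by a fresh kubeadm init, both visible below. (The trailing "<no value>" in the warning is Go's text/template output for a missing field in the message template, not part of the failure reason.) The reset step, unwrapped from the log for readability (a sketch using the binary path shown there):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
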
	I1216 06:07:59.941847   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:08:00.597226   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:08:00.625624   11368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:08:00.639577   11368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:08:00.643732   11368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:08:00.655113   11368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:08:00.655113   11368 kubeadm.go:158] found existing configuration files:
	
	I1216 06:08:00.660591   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:08:00.672859   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:08:00.676824   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:08:00.694141   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:08:00.706148   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:08:00.713297   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:08:00.733484   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:08:00.747476   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:08:00.750922   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:08:00.766617   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:08:00.778925   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:08:00.782665   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
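
The grep/rm pairs above are the stale-kubeconfig sweep: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before init. Here every grep exits with status 2 because the files are simply absent (the reset already removed them), so the rm calls are no-ops. Condensed (a sketch):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' \
          "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # absent, or pointing at the wrong endpoint
      fi
    done
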
	I1216 06:08:00.800431   11368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:08:00.924115   11368 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:08:01.000082   11368 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:08:01.102328   11368 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:12:02.114874   11368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:12:02.115036   11368 kubeadm.go:319] 
	I1216 06:12:02.115323   11368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:12:02.119332   11368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:12:02.119332   11368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:12:02.120135   11368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:12:02.120135   11368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:12:02.120135   11368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:12:02.120871   11368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:12:02.121013   11368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:12:02.121192   11368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:12:02.122017   11368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:12:02.122194   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:12:02.122408   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:12:02.122510   11368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:12:02.122753   11368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:12:02.122840   11368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:12:02.123033   11368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:12:02.123163   11368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:12:02.123310   11368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:12:02.123421   11368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:12:02.123572   11368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:12:02.123980   11368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:12:02.124094   11368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] OS: Linux
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:12:02.124933   11368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:12:02.125112   11368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:12:02.125304   11368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:12:02.125449   11368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:12:02.125567   11368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:12:02.125730   11368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:12:02.126387   11368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:12:02.126558   11368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:12:02.407594   11368 out.go:252]   - Generating certificates and keys ...
	I1216 06:12:02.407968   11368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:12:02.408113   11368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:12:02.408288   11368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:12:02.408453   11368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:12:02.408673   11368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:12:02.408815   11368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:12:02.408921   11368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:12:02.409054   11368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:12:02.409210   11368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:12:02.409444   11368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:12:02.409514   11368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:12:02.409673   11368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:12:02.409749   11368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:12:02.409903   11368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:12:02.410062   11368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:12:02.410138   11368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:12:02.410298   11368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:12:02.410526   11368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:12:02.410600   11368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:12:02.453808   11368 out.go:252]   - Booting up control plane ...
	I1216 06:12:02.454792   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:12:02.455026   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:12:02.455098   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:12:02.455292   11368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:12:02.455588   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:12:02.455804   11368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:12:02.455984   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:12:02.456047   11368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:12:02.456475   11368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:12:02.456689   11368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:12:02.456759   11368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000829212s
	I1216 06:12:02.456833   11368 kubeadm.go:319] 
	I1216 06:12:02.456918   11368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:12:02.457018   11368 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:12:02.457186   11368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:12:02.457264   11368 kubeadm.go:319] 
	I1216 06:12:02.457466   11368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:12:02.457538   11368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:12:02.457617   11368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:12:02.457681   11368 kubeadm.go:319] 
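
The init died at the kubelet health gate: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and never saw a healthy reply. The probe, plus the two triage commands kubeadm suggests, runnable inside the node (a sketch):

    # Roughly kubeadm's own probe; 200 means healthy:
    curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:10248/healthz
    # Why the kubelet is down:
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 100
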
	W1216 06:12:02.457840   11368 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829212s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:12:02.460957   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:12:02.923334   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:12:02.942284   11368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:12:02.947934   11368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:12:02.960033   11368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:12:02.960033   11368 kubeadm.go:158] found existing configuration files:
	
	I1216 06:12:02.963699   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:12:02.976249   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:12:02.980398   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:12:02.996745   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:12:03.010587   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:12:03.014857   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:12:03.033804   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.047258   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:12:03.052529   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.071112   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:12:03.084411   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:12:03.089634   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:12:03.107865   11368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:12:03.217980   11368 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:12:03.304403   11368 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:12:03.402507   11368 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:16:04.221088   11368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:16:04.221196   11368 kubeadm.go:319] 
	I1216 06:16:04.221440   11368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
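
Note the retry fails differently: "connect: connection refused" instead of "context deadline exceeded". On this attempt nothing was bound to port 10248 at all, whereas the first attempt's deadline expired without ever getting a definitive refusal or a healthy reply. Distinguishing the two from inside the node (a sketch):

    # Is anything listening on the kubelet healthz port?
    ss -ltn 'sport = :10248'
    # Empty output means connection refused is expected: the kubelet never bound the port.
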
	I1216 06:16:04.223812   11368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:16:04.223812   11368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:16:04.223812   11368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:16:04.223812   11368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:16:04.225071   11368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:16:04.225168   11368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:16:04.225265   11368 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:16:04.225349   11368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:16:04.226047   11368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:16:04.226187   11368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:16:04.226369   11368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:16:04.226489   11368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:16:04.226625   11368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:16:04.226784   11368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:16:04.227650   11368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:16:04.227732   11368 kubeadm.go:319] OS: Linux
	I1216 06:16:04.227833   11368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:16:04.228009   11368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:16:04.228204   11368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:16:04.228818   11368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:16:04.229066   11368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:16:04.229179   11368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:16:04.229449   11368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:16:04.229583   11368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:16:04.232099   11368 out.go:252]   - Generating certificates and keys ...
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:16:04.234106   11368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:16:04.234106   11368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:16:04.235106   11368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:16:04.235106   11368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:16:04.253780   11368 out.go:252]   - Booting up control plane ...
	I1216 06:16:04.254772   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:16:04.255084   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:16:04.255255   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:16:04.255540   11368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:16:04.255851   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:16:04.255940   11368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:16:04.255940   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:16:04.255940   11368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:16:04.256639   11368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:16:04.256698   11368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:16:04.256698   11368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000618293s
	I1216 06:16:04.256698   11368 kubeadm.go:319] 
	I1216 06:16:04.256698   11368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:16:04.257241   11368 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:16:04.257505   11368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:16:04.257554   11368 kubeadm.go:319] 
	I1216 06:16:04.257808   11368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:16:04.257951   11368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:16:04.258007   11368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:16:04.258072   11368 kubeadm.go:319] 
	I1216 06:16:04.258136   11368 kubeadm.go:403] duration metric: took 12m7.6565804s to StartCluster
	I1216 06:16:04.258244   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:16:04.262878   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:16:04.325517   11368 cri.go:89] found id: ""
	I1216 06:16:04.325517   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.325517   11368 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:16:04.325517   11368 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:16:04.329515   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:16:04.383148   11368 cri.go:89] found id: ""
	I1216 06:16:04.384150   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.384150   11368 logs.go:284] No container was found matching "etcd"
	I1216 06:16:04.384150   11368 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:16:04.388147   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:16:04.431752   11368 cri.go:89] found id: ""
	I1216 06:16:04.432746   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.432746   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:16:04.432746   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:16:04.436778   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:16:04.480357   11368 cri.go:89] found id: ""
	I1216 06:16:04.480357   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.480357   11368 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:16:04.480357   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:16:04.485913   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:16:04.537648   11368 cri.go:89] found id: ""
	I1216 06:16:04.537701   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.537701   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:16:04.537701   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:16:04.542506   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:16:04.583719   11368 cri.go:89] found id: ""
	I1216 06:16:04.583800   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.583800   11368 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:16:04.583800   11368 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:16:04.588223   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:16:04.633250   11368 cri.go:89] found id: ""
	I1216 06:16:04.633250   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.633322   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:16:04.633322   11368 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 06:16:04.637215   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 06:16:04.698406   11368 cri.go:89] found id: ""
	I1216 06:16:04.698406   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.698406   11368 logs.go:284] No container was found matching "storage-provisioner"
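
With the cluster start abandoned (12m7.66s in StartCluster), the final log sweep switches from Docker name filters to CRI queries, and every one returns an empty ID list: the kubelet never created any control-plane containers. One query from that sweep, runnable by hand (a sketch):

    sudo crictl ps -a --quiet --name=kube-apiserver
    # No output corresponds to the 'found id: ""' lines above.
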
	I1216 06:16:04.698406   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:16:04.698406   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:16:04.766417   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:16:04.766417   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:16:04.807190   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:16:04.807190   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:16:04.905378   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:16:04.905378   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:16:04.905378   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:16:04.939415   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:16:04.939415   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:16:04.988337   11368 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000618293s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:16:04.988337   11368 out.go:285] * 
	W1216 06:16:04.992326   11368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:04.998327   11368 out.go:203] 
	W1216 06:16:05.003329   11368 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1216 06:16:05.003329   11368 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:16:05.003329   11368 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:16:05.006336   11368 out.go:203] 

** /stderr **
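The recurring failure above is kubeadm timing out on the kubelet's /healthz probe at 127.0.0.1:10248. A minimal sketch of how one might confirm that by hand, assuming the kubernetes-upgrade-633300 node container is still running (these are the same checks kubeadm's own hint names, run through minikube ssh):

	# Inspect the kubelet unit and its recent journal, then probe the same
	# endpoint kubeadm polls; all three commands run inside the node container.
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-633300 -- sudo systemctl status kubelet --no-pager
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-633300 -- sudo journalctl -xeu kubelet -n 100 --no-pager
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-633300 -- curl -sS http://127.0.0.1:10248/healthz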
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-633300 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker : exit status 109
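Given the Suggestion emitted in the log above, a hedged sketch of the retry it proposes: the same start arguments as the failing invocation with the kubelet cgroup-driver override added (whether this clears the health check on this host is untested here):

	# Identical profile and flags to the failing run, plus the --extra-config
	# override named in minikube's own suggestion.
	out/minikube-windows-amd64.exe start -p kubernetes-upgrade-633300 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker --extra-config=kubelet.cgroup-driver=systemd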
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-633300 version --output=json
E1216 06:16:07.271723   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:12.393335   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-633300 version --output=json: exit status 1 (10.1789855s)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "34",
	    "gitVersion": "v1.34.3",
	    "gitCommit": "df11db1c0f08fab3c0baee1e5ce6efbf816af7f1",
	    "gitTreeState": "clean",
	    "buildDate": "2025-12-09T15:06:39Z",
	    "goVersion": "go1.24.11",
	    "compiler": "gc",
	    "platform": "windows/amd64"
	  },
	  "kustomizeVersion": "v5.7.1"
	}

-- /stdout --
** stderr ** 
	Unable to connect to the server: EOF

** /stderr **
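The client half of the version call rendered fine, so the EOF sits on the server connection. A small sketch that separates the two checks, assuming the same kubeconfig context (both flags are stock kubectl):

	# --client never contacts the apiserver; the raw /healthz call does,
	# with a short timeout so a dead endpoint fails fast instead of hanging.
	kubectl --context kubernetes-upgrade-633300 version --client --output=json
	kubectl --context kubernetes-upgrade-633300 get --raw /healthz --request-timeout=5s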
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-16 06:16:16.52359 +0000 UTC m=+6613.763164901
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-633300
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-633300:

-- stdout --
	[
	    {
	        "Id": "3115163a7cbeed231bc795583ba7bf6233b954853c1b530baa6e06c31753bbe4",
	        "Created": "2025-12-16T06:02:47.555965666Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303019,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:03:32.546318748Z",
	            "FinishedAt": "2025-12-16T06:03:30.463127585Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/3115163a7cbeed231bc795583ba7bf6233b954853c1b530baa6e06c31753bbe4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3115163a7cbeed231bc795583ba7bf6233b954853c1b530baa6e06c31753bbe4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3115163a7cbeed231bc795583ba7bf6233b954853c1b530baa6e06c31753bbe4/hosts",
	        "LogPath": "/var/lib/docker/containers/3115163a7cbeed231bc795583ba7bf6233b954853c1b530baa6e06c31753bbe4/3115163a7cbeed231bc795583ba7bf6233b954853c1b530baa6e06c31753bbe4-json.log",
	        "Name": "/kubernetes-upgrade-633300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-633300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-633300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ab5db9f492f80aba4fdadd5e07c42cfe8ccaa88101bcb845704c5df516c77de3-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab5db9f492f80aba4fdadd5e07c42cfe8ccaa88101bcb845704c5df516c77de3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab5db9f492f80aba4fdadd5e07c42cfe8ccaa88101bcb845704c5df516c77de3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab5db9f492f80aba4fdadd5e07c42cfe8ccaa88101bcb845704c5df516c77de3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-633300",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-633300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-633300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-633300",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-633300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c52bd36ab85b2f8a4c77a7c9ad79c7e4ae1a0b08829a1ae6d3ff570548c43932",
	            "SandboxKey": "/var/run/docker/netns/c52bd36ab85b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54082"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-633300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "16e84f54069139722ea3235f8dbd86db1d474cfd787d38aee68f0524b526f4e0",
	                    "EndpointID": "80ecaadb7dae95e0367c4dd71b67d6953239b2d97762dc667b6320774e35dcc2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-633300",
	                        "3115163a7cbe"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
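Most of the inspect blob above is boilerplate. The fields this post-mortem actually leans on (container state, restart count, and the host port mapped to the apiserver's 8443/tcp) can be pulled directly, since docker inspect accepts Go templates; a sketch against the same container:

	# Prints e.g. status=running restarts=0 apiserver=127.0.0.1:54082,
	# matching the State, RestartCount and Ports entries shown above.
	docker inspect kubernetes-upgrade-633300 --format 'status={{.State.Status}} restarts={{.RestartCount}} apiserver=127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'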
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-633300 -n kubernetes-upgrade-633300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-633300 -n kubernetes-upgrade-633300: exit status 2 (595.6749ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
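A Host of Running together with exit status 2 suggests some other component is unhealthy; that is a guess from the exit code alone, but dropping the --format filter would show the per-component breakdown:

	# Without --format, status reports host, kubelet, apiserver and
	# kubeconfig states separately rather than only the Host field.
	out/minikube-windows-amd64.exe status -p kubernetes-upgrade-633300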
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-633300 logs -n 25
E1216 06:16:18.321127   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-633300 logs -n 25: (1.1751423s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-030800 sudo systemctl status docker --all --full --no-pager                                                                  │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat docker --no-pager                                                                                  │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/docker/daemon.json                                                                                      │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo docker system info                                                                                               │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status cri-docker --all --full --no-pager                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat cri-docker --no-pager                                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-686300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-686300 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ ssh     │ -p kindnet-030800 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cri-dockerd --version                                                                                            │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status containerd --all --full --no-pager                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat containerd --no-pager                                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /lib/systemd/system/containerd.service                                                                       │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/containerd/config.toml                                                                                  │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo containerd config dump                                                                                           │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status crio --all --full --no-pager                                                                    │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat crio --no-pager                                                                                    │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crio config                                                                                                      │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p kindnet-030800                                                                                                                       │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:14 UTC │
	│ start   │ -p calico-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker                            │ calico-030800     │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:14 UTC │ 16 Dec 25 06:15 UTC │
	│ stop    │ -p no-preload-686300 --alsologtostderr -v=3                                                                                             │ no-preload-686300 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:15 UTC │ 16 Dec 25 06:15 UTC │
	│ addons  │ enable dashboard -p no-preload-686300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                            │ no-preload-686300 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:15 UTC │ 16 Dec 25 06:15 UTC │
	│ start   │ -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0    │ no-preload-686300 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:15 UTC │                     │
	│ ssh     │ -p calico-030800 pgrep -a kubelet                                                                                                       │ calico-030800     │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:15:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:15:47.979577    2100 out.go:360] Setting OutFile to fd 1972 ...
	I1216 06:15:48.028755    2100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:15:48.028755    2100 out.go:374] Setting ErrFile to fd 1148...
	I1216 06:15:48.028755    2100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:15:48.043433    2100 out.go:368] Setting JSON to false
	I1216 06:15:48.046345    2100 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6769,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:15:48.046345    2100 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:15:48.049346    2100 out.go:179] * [no-preload-686300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:15:48.053458    2100 notify.go:221] Checking for updates...
	I1216 06:15:48.053458    2100 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:15:48.057632    2100 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:15:48.061089    2100 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:15:48.067048    2100 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:15:48.073242    2100 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:15:48.078055    2100 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:15:48.078831    2100 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:15:48.203359    2100 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:15:48.206357    2100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:15:48.473036    2100 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:15:48.450762731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:15:48.477180    2100 out.go:179] * Using the docker driver based on existing profile
	I1216 06:15:48.483381    2100 start.go:309] selected driver: docker
	I1216 06:15:48.483381    2100 start.go:927] validating driver "docker" against &{Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:15:48.483625    2100 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:15:48.579070    2100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:15:48.841358    2100 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:15:48.817390559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:15:48.842360    2100 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:15:48.842360    2100 cni.go:84] Creating CNI manager for ""
	I1216 06:15:48.842360    2100 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:15:48.842360    2100 start.go:353] cluster config:
	{Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:15:48.845357    2100 out.go:179] * Starting "no-preload-686300" primary control-plane node in "no-preload-686300" cluster
	I1216 06:15:48.847358    2100 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:15:48.850678    2100 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:15:48.852808    2100 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:15:48.852808    2100 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:15:48.853054    2100 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\config.json ...
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
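A note on the "windows sanitize" entries above: minikube stores each cached image as a tarball under .minikube\cache\images, and the ":" tag separator is rewritten to "_" because ":" is not a legal character in NTFS file names. A minimal sketch of that mapping (illustrative only, not minikube's localpath.go implementation):

	# Sketch: the tag-separator rewrite visible in the sanitize lines above.
	# "registry.k8s.io/pause:3.10.1" -> "registry.k8s.io/pause_3.10.1"
	sanitize() { printf '%s\n' "${1//:/_}"; }
	sanitize "registry.k8s.io/pause:3.10.1"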
	I1216 06:15:49.196661    2100 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:15:49.196661    2100 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:15:49.196740    2100 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:15:49.196740    2100 start.go:360] acquireMachinesLock for no-preload-686300: {Name:mk990048edb42dd06e1fb0f2c86d8b2d42a7457e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:49.197120    2100 start.go:364] duration metric: took 270.7µs to acquireMachinesLock for "no-preload-686300"
	I1216 06:15:49.197296    2100 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:15:49.197316    2100 fix.go:54] fixHost starting: 
	I1216 06:15:49.209064    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:15:49.275055    2100 fix.go:112] recreateIfNeeded on no-preload-686300: state=Stopped err=<nil>
	W1216 06:15:49.275055    2100 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:15:49.279060    2100 out.go:252] * Restarting existing docker container for "no-preload-686300" ...
	I1216 06:15:49.285061    2100 cli_runner.go:164] Run: docker start no-preload-686300
	I1216 06:15:50.984124    2100 cli_runner.go:217] Completed: docker start no-preload-686300: (1.6990396s)
	I1216 06:15:50.997032    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:15:51.085214    2100 kic.go:430] container "no-preload-686300" state is running.
	I1216 06:15:51.092218    2100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:15:51.171055    2100 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\config.json ...
	I1216 06:15:51.173066    2100 machine.go:94] provisionDockerMachine start ...
	I1216 06:15:51.179056    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:51.268482    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:51.269474    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:51.269474    2100 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:15:51.272603    2100 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
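The handshake EOF above is usually transient: the container was started only a couple of seconds earlier and its sshd is not yet accepting connections, and the provisioner keeps retrying until the `hostname` command succeeds (its output appears further down under PID 2100). A sketch of an equivalent retry by hand, using the port and user this log reports (not minikube's actual Go retry logic):

	# Poll until sshd inside the restarted node accepts connections.
	until ssh -p 55112 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	    docker@127.0.0.1 true 2>/dev/null; do
	  sleep 1
	done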
	I1216 06:15:52.151550    2100 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.151601    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1216 06:15:52.151601    2100 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.29827s
	I1216 06:15:52.151601    2100 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1216 06:15:52.167356    2100 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.167649    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1216 06:15:52.167649    2100 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.3143177s
	I1216 06:15:52.167649    2100 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 06:15:52.181257    2100 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.181257    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1216 06:15:52.182247    2100 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3289154s
	I1216 06:15:52.182247    2100 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1216 06:15:52.196083    2100 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.196878    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1216 06:15:52.197497    2100 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.3441648s
	I1216 06:15:52.197541    2100 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1216 06:15:52.210930    2100 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.211765    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1216 06:15:52.211765    2100 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.3584325s
	I1216 06:15:52.211765    2100 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1216 06:15:52.235265    2100 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.235388    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1216 06:15:52.235388    2100 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.3820559s
	I1216 06:15:52.235388    2100 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1216 06:15:52.248968    2100 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.249896    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1216 06:15:52.249896    2100 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.3965632s
	I1216 06:15:52.249896    2100 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 06:15:52.258600    2100 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.258600    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1216 06:15:52.258600    2100 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.4052668s
	I1216 06:15:52.258600    2100 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 06:15:52.258600    2100 cache.go:87] Successfully saved all images to host disk.
	I1216 06:15:55.189488   11692 system_pods.go:86] 9 kube-system pods found
	I1216 06:15:55.189488   11692 system_pods.go:89] "calico-kube-controllers-5c676f698c-mff5d" [baaacd8e-234d-46c9-8f36-f014ec7a9417] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 06:15:55.189488   11692 system_pods.go:89] "calico-node-wrqs4" [32bf00a0-0258-497c-ad8f-1ee716276745] Running
	I1216 06:15:55.189488   11692 system_pods.go:89] "coredns-66bc5c9577-j7vnq" [bbe0a84b-a582-4aa9-a610-5922f145fca3] Running
	I1216 06:15:55.189488   11692 system_pods.go:89] "etcd-calico-030800" [5536afcd-fb4d-4dfe-a85a-a3880c70de84] Running
	I1216 06:15:55.189488   11692 system_pods.go:89] "kube-apiserver-calico-030800" [6df6f2fc-d402-4cd4-b287-5d0b6807753f] Running
	I1216 06:15:55.189488   11692 system_pods.go:89] "kube-controller-manager-calico-030800" [d62d546a-0765-4068-9bb4-7e84e30875b7] Running
	I1216 06:15:55.189488   11692 system_pods.go:89] "kube-proxy-qdm7q" [3c4bcc61-39fa-427a-b596-52a4203cc8b6] Running
	I1216 06:15:55.189488   11692 system_pods.go:89] "kube-scheduler-calico-030800" [02746219-ae1e-41c0-b53e-1433cc3f2da7] Running
	I1216 06:15:55.189488   11692 system_pods.go:89] "storage-provisioner" [380978b4-64ab-4975-9c07-a976582834d8] Running
	I1216 06:15:55.189488   11692 system_pods.go:126] duration metric: took 43.4614064s to wait for k8s-apps to be running ...
	I1216 06:15:55.189488   11692 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:15:55.194510   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:15:55.219092   11692 system_svc.go:56] duration metric: took 29.6033ms WaitForService to wait for kubelet
	I1216 06:15:55.219092   11692 kubeadm.go:587] duration metric: took 1m2.0346142s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:15:55.219092   11692 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:15:55.225431   11692 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:15:55.225431   11692 node_conditions.go:123] node cpu capacity is 16
	I1216 06:15:55.225431   11692 node_conditions.go:105] duration metric: took 6.3391ms to run NodePressure ...
	I1216 06:15:55.225431   11692 start.go:242] waiting for startup goroutines ...
	I1216 06:15:55.225802   11692 start.go:247] waiting for cluster config update ...
	I1216 06:15:55.225952   11692 start.go:256] writing updated cluster config ...
	I1216 06:15:55.234554   11692 ssh_runner.go:195] Run: rm -f paused
	I1216 06:15:55.243141   11692 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:15:55.250149   11692 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-j7vnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.261144   11692 pod_ready.go:94] pod "coredns-66bc5c9577-j7vnq" is "Ready"
	I1216 06:15:55.261144   11692 pod_ready.go:86] duration metric: took 10.0003ms for pod "coredns-66bc5c9577-j7vnq" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.266146   11692 pod_ready.go:83] waiting for pod "etcd-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.275144   11692 pod_ready.go:94] pod "etcd-calico-030800" is "Ready"
	I1216 06:15:55.275144   11692 pod_ready.go:86] duration metric: took 8.9975ms for pod "etcd-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.280165   11692 pod_ready.go:83] waiting for pod "kube-apiserver-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.289147   11692 pod_ready.go:94] pod "kube-apiserver-calico-030800" is "Ready"
	I1216 06:15:55.289147   11692 pod_ready.go:86] duration metric: took 8.9823ms for pod "kube-apiserver-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.294152   11692 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.651953   11692 pod_ready.go:94] pod "kube-controller-manager-calico-030800" is "Ready"
	I1216 06:15:55.651953   11692 pod_ready.go:86] duration metric: took 357.7955ms for pod "kube-controller-manager-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:55.850886   11692 pod_ready.go:83] waiting for pod "kube-proxy-qdm7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:56.251839   11692 pod_ready.go:94] pod "kube-proxy-qdm7q" is "Ready"
	I1216 06:15:56.251839   11692 pod_ready.go:86] duration metric: took 400.9476ms for pod "kube-proxy-qdm7q" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:56.474519   11692 pod_ready.go:83] waiting for pod "kube-scheduler-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:56.864552   11692 pod_ready.go:94] pod "kube-scheduler-calico-030800" is "Ready"
	I1216 06:15:56.864552   11692 pod_ready.go:86] duration metric: took 389.9717ms for pod "kube-scheduler-calico-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:15:56.864552   11692 pod_ready.go:40] duration metric: took 1.6213887s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:15:56.969552   11692 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:15:56.972552   11692 out.go:179] * Done! kubectl is now configured to use "calico-030800" cluster and "default" namespace by default
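The entries tagged with PID 11692 above belong to the parallel calico-030800 run, which finishes here; the no-preload-686300 restart continues below under PID 2100. When reading interleaved output like this, filtering by PID helps (a sketch; the log file path is a placeholder):

	# Keep only this test's entries (PID 2100) from the combined log.
	grep -E '^[[:space:]]*[IWE][0-9]{4} [0-9:.]+ +2100 ' minikube.log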
	I1216 06:15:54.446040    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-686300
	
	I1216 06:15:54.446581    2100 ubuntu.go:182] provisioning hostname "no-preload-686300"
	I1216 06:15:54.449644    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:54.512628    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:54.513628    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:54.513628    2100 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-686300 && echo "no-preload-686300" | sudo tee /etc/hostname
	I1216 06:15:54.720738    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-686300
	
	I1216 06:15:54.724727    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:54.784732    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:54.785726    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:54.785726    2100 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-686300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-686300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-686300' | sudo tee -a /etc/hosts; 
				fi
			fi
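The script above is the standard Debian-style 127.0.1.1 hostname pin, applied only when /etc/hosts does not already carry an entry for the new name. A quick way to confirm the result from the host (a sketch using this run's profile name):

	# Verify the hostname and its /etc/hosts pin inside the node.
	minikube -p no-preload-686300 ssh "hostname && grep no-preload-686300 /etc/hosts"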
	I1216 06:15:54.947054    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:15:54.947054    2100 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:15:54.947054    2100 ubuntu.go:190] setting up certificates
	I1216 06:15:54.947054    2100 provision.go:84] configureAuth start
	I1216 06:15:54.952073    2100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:15:55.012040    2100 provision.go:143] copyHostCerts
	I1216 06:15:55.012040    2100 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:15:55.012040    2100 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:15:55.013040    2100 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:15:55.014046    2100 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:15:55.014046    2100 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:15:55.014046    2100 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:15:55.015058    2100 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:15:55.015058    2100 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:15:55.015058    2100 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:15:55.016069    2100 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-686300 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-686300]
	I1216 06:15:55.208500    2100 provision.go:177] copyRemoteCerts
	I1216 06:15:55.214092    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:15:55.219092    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:55.292153    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:55.413148    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:15:55.445511    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:15:55.475517    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:15:55.500524    2100 provision.go:87] duration metric: took 553.4625ms to configureAuth
	I1216 06:15:55.500524    2100 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:15:55.501526    2100 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:15:55.505517    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:55.571516    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:55.571516    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:55.571516    2100 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:15:55.757941    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:15:55.757941    2100 ubuntu.go:71] root file system type: overlay
	I1216 06:15:55.757941    2100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:15:55.762931    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:55.827909    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:55.828738    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:55.828906    2100 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:15:56.009883    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:15:56.013880    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.082328    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:56.082328    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:56.082328    2100 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:15:56.266049    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: 
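Two details of the unit update above are worth spelling out. The empty ExecStart= line is the systemd idiom for clearing the command inherited from the base unit, as the comments embedded in the unit explain, and the diff -u ... || { ... } wrapper makes the write idempotent: dockerd is only restarted when the rendered unit differs from the installed one. Here the empty output indicates the files matched, so the restart branch was skipped. The guard in isolation (a sketch with the paths from this log):

	# Install the freshly rendered unit and restart dockerd only if it
	# actually changed; `diff -u` exits 0 when the files are identical.
	CUR=/lib/systemd/system/docker.service
	NEW=/lib/systemd/system/docker.service.new
	sudo diff -u "$CUR" "$NEW" || {
	  sudo mv "$NEW" "$CUR"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}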
	I1216 06:15:56.266049    2100 machine.go:97] duration metric: took 5.0929137s to provisionDockerMachine
	I1216 06:15:56.266049    2100 start.go:293] postStartSetup for "no-preload-686300" (driver="docker")
	I1216 06:15:56.266049    2100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:15:56.270047    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:15:56.275284    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.332073    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:56.466795    2100 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:15:56.478406    2100 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:15:56.478406    2100 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:15:56.478406    2100 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:15:56.478406    2100 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:15:56.479018    2100 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:15:56.483909    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:15:56.495492    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:15:56.519881    2100 start.go:296] duration metric: took 253.8281ms for postStartSetup
	I1216 06:15:56.524684    2100 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:15:56.527977    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.580650    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:56.705030    2100 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:15:56.713180    2100 fix.go:56] duration metric: took 7.5157617s for fixHost
	I1216 06:15:56.713180    2100 start.go:83] releasing machines lock for "no-preload-686300", held for 7.5159578s
	I1216 06:15:56.717871    2100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:15:56.776327    2100 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:15:56.780319    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.780319    2100 ssh_runner.go:195] Run: cat /version.json
	I1216 06:15:56.784319    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.837565    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:56.838791    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	W1216 06:15:56.954554    2100 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
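The probe fails for a mundane reason: the runner invoked the Windows binary name curl.exe inside the Linux node, where only curl exists. That failed probe is likely what surfaces a moment later as the "Failing to connect to https://registry.k8s.io/" warning. The check with the Linux binary name (a sketch; run inside the node, e.g. via minikube -p no-preload-686300 ssh):

	# Same registry connectivity probe, Linux binary name.
	curl -sS -m 2 https://registry.k8s.io/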
	I1216 06:15:56.960543    2100 ssh_runner.go:195] Run: systemctl --version
	I1216 06:15:56.975699    2100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:15:56.987286    2100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:15:56.992118    2100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:15:57.010839    2100 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:15:57.010839    2100 start.go:496] detecting cgroup driver to use...
	I1216 06:15:57.010839    2100 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:15:57.010839    2100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:15:57.037138    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:15:57.055131    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1216 06:15:57.067133    2100 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:15:57.067133    2100 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:15:57.069137    2100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:15:57.073127    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:15:57.091129    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:15:57.110137    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:15:57.128137    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:15:57.146128    2100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:15:57.163143    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:15:57.180135    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:15:57.196159    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:15:57.212135    2100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:15:57.227784    2100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:15:57.243311    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:15:57.379464    2100 ssh_runner.go:195] Run: sudo systemctl restart containerd
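The sed pipeline above rewrites /etc/containerd/config.toml in place before the restart: it pins the sandbox (pause) image to registry.k8s.io/pause:3.10.1, sets SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizes the runc shim to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. A spot-check of the resulting keys (a sketch; run inside the node):

	# Confirm the settings the sed pipeline rewrote.
	grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml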
	I1216 06:15:57.540234    2100 start.go:496] detecting cgroup driver to use...
	I1216 06:15:57.540234    2100 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:15:57.545024    2100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:15:57.569426    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:15:57.589440    2100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:15:57.641338    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:15:57.665986    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:15:57.688905    2100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:15:57.713525    2100 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:15:57.725526    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:15:57.736520    2100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:15:57.759338    2100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:15:57.866852    2100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:15:57.971868    2100 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:15:57.971868    2100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:15:58.001025    2100 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:15:58.022627    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:15:58.177131    2100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:16:00.683636    2100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5064702s)
	I1216 06:16:00.688431    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:16:00.709648    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:16:00.734513    2100 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 06:16:00.757614    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:16:00.780216    2100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:16:00.916626    2100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:16:01.079943    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:01.218481    2100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:16:01.242669    2100 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:16:01.266933    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:01.411497    2100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:16:01.516769    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:16:01.533957    2100 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:16:01.538498    2100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:16:01.546160    2100 start.go:564] Will wait 60s for crictl version
	I1216 06:16:01.550366    2100 ssh_runner.go:195] Run: which crictl
	I1216 06:16:01.561419    2100 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:16:01.603331    2100 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
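The version block above confirms the repointing worked: crictl reaches the engine through cri-dockerd and reports RuntimeName docker with RuntimeVersion 29.1.3, matching the Docker version in the banner that follows. The same query by hand (a sketch, inside the node):

	# Ask the CRI endpoint for runtime identity; with the crictl.yaml written
	# above this goes through /var/run/cri-dockerd.sock.
	sudo /usr/local/bin/crictl version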
	I1216 06:16:01.607249    2100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:16:01.653369    2100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:16:01.695223    2100 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 06:16:01.701025    2100 cli_runner.go:164] Run: docker exec -t no-preload-686300 dig +short host.docker.internal
	I1216 06:16:01.830212    2100 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:16:01.834723    2100 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:16:01.841898    2100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
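The one-liner above idempotently refreshes the host.minikube.internal entry: grep -v drops any stale line, echo appends the IP just discovered by digging host.docker.internal, and the result is copied back with cp rather than mv, since /etc/hosts inside a container is typically a bind mount that cannot be replaced by rename. In isolation (a sketch with this run's values):

	# Idempotent /etc/hosts refresh; ENTRY uses the IP dug up above.
	ENTRY=$'192.168.65.254\thost.minikube.internal'
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts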
	I1216 06:16:01.861623    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:01.916448    2100 kubeadm.go:884] updating cluster {Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:16:01.916448    2100 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:16:01.921541    2100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:16:01.954027    2100 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:16:01.954027    2100 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:16:01.954027    2100 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1216 06:16:01.954601    2100 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-686300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
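In the unit fragment above, the empty ExecStart= line is deliberate: inside a systemd drop-in it clears the ExecStart inherited from the base kubelet.service, so the flag-laden command that follows becomes the only one. Illustrative commands for inspecting the merged result on the node:

    # Base unit plus the 10-kubeadm.conf drop-in written a few steps below
    systemctl cat kubelet
    # Just the effective command line
    systemctl show kubelet -p ExecStart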
	I1216 06:16:01.957547    2100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:16:02.032477    2100 cni.go:84] Creating CNI manager for ""
	I1216 06:16:02.032477    2100 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:16:02.032477    2100 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:16:02.032477    2100 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-686300 NodeName:no-preload-686300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:16:02.033125    2100 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-686300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:16:02.037302    2100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:16:02.050520    2100 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:16:02.055773    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:16:02.069144    2100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 06:16:02.087988    2100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:16:02.107159    2100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
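kubeadm.yaml.new holds the multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). kubeadm can sanity-check such a file before an init is attempted; a sketch, assuming the binary path from this run and a kubeadm new enough (v1.26+) to ship 'config validate':

    # Validate the generated config against the kubeadm API types (illustrative)
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new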
	I1216 06:16:02.131154    2100 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:16:02.138592    2100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:16:02.164109    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:02.316398    2100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:16:02.337534    2100 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300 for IP: 192.168.76.2
	I1216 06:16:02.337534    2100 certs.go:195] generating shared ca certs ...
	I1216 06:16:02.337534    2100 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:02.338569    2100 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:16:02.338569    2100 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:16:02.338569    2100 certs.go:257] generating profile certs ...
	I1216 06:16:02.339339    2100 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.key
	I1216 06:16:02.339930    2100 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key.de5dcef0
	I1216 06:16:02.340107    2100 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key
	I1216 06:16:02.340956    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:16:02.341198    2100 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:16:02.341261    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:16:02.341499    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:16:02.341684    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:16:02.341684    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:16:02.341684    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:16:02.343095    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:16:02.368546    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:16:02.399022    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:16:02.424980    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:16:02.453485    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:16:02.487356    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:16:02.515064    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:16:02.540749    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:16:02.565144    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:16:02.590623    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:16:02.617426    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:16:02.640948    2100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:16:02.664234    2100 ssh_runner.go:195] Run: openssl version
	I1216 06:16:02.677958    2100 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.693840    2100 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:16:02.709650    2100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.716131    2100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.720662    2100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.770093    2100 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:16:02.786257    2100 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.804343    2100 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:16:02.820485    2100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.827160    2100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.831640    2100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.879678    2100 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:16:02.895769    2100 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.914074    2100 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:16:02.931602    2100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.941222    2100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.944922    2100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.993477    2100 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
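The copy, link, hash, and test -L sequence above runs once per CA and reproduces what c_rehash does: OpenSSL locates a CA in /etc/ssl/certs by the hash of its subject name plus a .0 suffix, which is why the symlinks checked here are named 3ec20f2e.0, b5213941.0, and 51391683.0. A sketch of how one of those names is derived (illustrative):

    # Subject-name hash -> /etc/ssl/certs/<hash>.0 (the link the log tests)
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$H.0"   # H is b5213941 in this run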
	I1216 06:16:03.010028    2100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:16:03.022808    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:16:03.076221    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:16:03.132138    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:16:03.193108    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:16:03.250120    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:16:03.324424    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
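Each -checkend 86400 call above exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours); a non-zero exit is what would push minikube to regenerate a certificate rather than restart the control plane with one about to expire. Illustrative use of the exit status:

    # Will this cert survive the next 24h? (exit 0 = yes)
    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
        echo "valid for at least 24h"
    else
        echo "expires within 24h"
    fi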
	I1216 06:16:03.378991    2100 kubeadm.go:401] StartCluster: {Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:16:03.383442    2100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:16:03.426627    2100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:16:03.448420    2100 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:16:03.448441    2100 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:16:03.454343    2100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:16:03.475733    2100 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:16:03.479687    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.530322    2100 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:16:03.531312    2100 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-686300" cluster setting kubeconfig missing "no-preload-686300" context setting]
	I1216 06:16:03.531312    2100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:03.554910    2100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:16:03.568450    2100 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 06:16:03.568450    2100 kubeadm.go:602] duration metric: took 120.007ms to restartPrimaryControlPlane
	I1216 06:16:03.568450    2100 kubeadm.go:403] duration metric: took 189.4567ms to StartCluster
	I1216 06:16:03.568450    2100 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:03.569459    2100 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:16:03.570898    2100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:03.571666    2100 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:16:03.571666    2100 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:16:03.571666    2100 addons.go:70] Setting storage-provisioner=true in profile "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:70] Setting dashboard=true in profile "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:70] Setting default-storageclass=true in profile "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:239] Setting addon dashboard=true in "no-preload-686300"
	I1216 06:16:03.571666    2100 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:239] Setting addon storage-provisioner=true in "no-preload-686300"
	W1216 06:16:03.571666    2100 addons.go:248] addon dashboard should already be in state true
	I1216 06:16:03.571666    2100 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:16:03.571666    2100 host.go:66] Checking if "no-preload-686300" exists ...
	I1216 06:16:03.571666    2100 host.go:66] Checking if "no-preload-686300" exists ...
	I1216 06:16:03.574673    2100 out.go:179] * Verifying Kubernetes components...
	I1216 06:16:03.582308    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.583195    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.583195    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:03.585220    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.654887    2100 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:16:03.654887    2100 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 06:16:03.657896    2100 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:16:03.655884    2100 addons.go:239] Setting addon default-storageclass=true in "no-preload-686300"
	I1216 06:16:03.657896    2100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:16:03.657896    2100 host.go:66] Checking if "no-preload-686300" exists ...
	I1216 06:16:03.661917    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.662889    2100 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 06:16:04.221088   11368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:16:04.221196   11368 kubeadm.go:319] 
	I1216 06:16:04.221440   11368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:16:04.223812   11368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:16:04.223812   11368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:16:04.223812   11368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:16:04.223812   11368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:16:04.224501   11368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:16:04.225071   11368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:16:04.225168   11368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:16:04.225265   11368 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:16:04.225349   11368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:16:04.225527   11368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:16:04.226047   11368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:16:04.226187   11368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:16:04.226369   11368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:16:04.226489   11368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:16:04.226625   11368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:16:04.226784   11368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:16:04.226980   11368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:16:04.227650   11368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:16:04.227732   11368 kubeadm.go:319] OS: Linux
	I1216 06:16:04.227833   11368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:16:04.228009   11368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:16:04.228204   11368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:16:04.228247   11368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:16:04.228818   11368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:16:04.229066   11368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:16:04.229179   11368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:16:04.229449   11368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:16:04.229583   11368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:16:04.232099   11368 out.go:252]   - Generating certificates and keys ...
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:16:04.233100   11368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:16:04.234106   11368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:16:04.234106   11368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:16:04.234106   11368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:16:04.235106   11368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:16:04.235106   11368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:16:04.253780   11368 out.go:252]   - Booting up control plane ...
	I1216 06:16:04.254772   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:16:04.255084   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:16:04.255255   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:16:04.255540   11368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:16:04.255851   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:16:04.255940   11368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:16:04.255940   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:16:04.255940   11368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:16:04.256639   11368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:16:04.256698   11368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:16:04.256698   11368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000618293s
	I1216 06:16:04.256698   11368 kubeadm.go:319] 
	I1216 06:16:04.256698   11368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:16:04.257241   11368 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:16:04.257505   11368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:16:04.257554   11368 kubeadm.go:319] 
	I1216 06:16:04.257808   11368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:16:04.257951   11368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:16:04.258007   11368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:16:04.258072   11368 kubeadm.go:319] 
	I1216 06:16:04.258136   11368 kubeadm.go:403] duration metric: took 12m7.6565804s to StartCluster
	I1216 06:16:04.258244   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:16:04.262878   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:16:04.325517   11368 cri.go:89] found id: ""
	I1216 06:16:04.325517   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.325517   11368 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:16:04.325517   11368 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:16:04.329515   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:16:04.383148   11368 cri.go:89] found id: ""
	I1216 06:16:04.384150   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.384150   11368 logs.go:284] No container was found matching "etcd"
	I1216 06:16:04.384150   11368 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:16:04.388147   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:16:04.431752   11368 cri.go:89] found id: ""
	I1216 06:16:04.432746   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.432746   11368 logs.go:284] No container was found matching "coredns"
	I1216 06:16:04.432746   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:16:04.436778   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:16:04.480357   11368 cri.go:89] found id: ""
	I1216 06:16:04.480357   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.480357   11368 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:16:04.480357   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:16:04.485913   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:16:04.537648   11368 cri.go:89] found id: ""
	I1216 06:16:04.537701   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.537701   11368 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:16:04.537701   11368 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:16:04.542506   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:16:04.583719   11368 cri.go:89] found id: ""
	I1216 06:16:04.583800   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.583800   11368 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:16:04.583800   11368 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:16:04.588223   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:16:04.633250   11368 cri.go:89] found id: ""
	I1216 06:16:04.633250   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.633322   11368 logs.go:284] No container was found matching "kindnet"
	I1216 06:16:04.633322   11368 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 06:16:04.637215   11368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 06:16:04.698406   11368 cri.go:89] found id: ""
	I1216 06:16:04.698406   11368 logs.go:282] 0 containers: []
	W1216 06:16:04.698406   11368 logs.go:284] No container was found matching "storage-provisioner"
	I1216 06:16:04.698406   11368 logs.go:123] Gathering logs for kubelet ...
	I1216 06:16:04.698406   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:16:04.766417   11368 logs.go:123] Gathering logs for dmesg ...
	I1216 06:16:04.766417   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:16:04.807190   11368 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:16:04.807190   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:16:04.905378   11368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:16:04.905378   11368 logs.go:123] Gathering logs for Docker ...
	I1216 06:16:04.905378   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:16:04.939415   11368 logs.go:123] Gathering logs for container status ...
	I1216 06:16:04.939415   11368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:16:04.988337   11368 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000618293s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
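Of the three preflight warnings above, the SystemVerification one names a concrete knob: on a cgroup v1 node (which these warnings indicate this WSL2 kernel is), kubelet v1.35+ refuses to run unless FailCgroupV1 is switched off. A minimal sketch of the opt-in it describes, assuming the option is spelled failCgroupV1 in the KubeletConfiguration document shown earlier in this log:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Opt into deprecated cgroup v1 support, per the warning above (illustrative)
    failCgroupV1: false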
	W1216 06:16:04.988337   11368 out.go:285] * 
	W1216 06:16:04.988337   11368 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000618293s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:16:04.988337   11368 out.go:285] * 
	W1216 06:16:04.992326   11368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:16:04.998327   11368 out.go:203] 
	W1216 06:16:05.003329   11368 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000618293s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
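The two diagnostics that kubeadm's error text suggests can be run from the host through the node's SSH session; a minimal sketch, assuming <profile> stands in for whichever cluster failed here (the commands themselves are quoted verbatim from the message above):

    # Inspect the kubelet inside the minikube node; <profile> is a placeholder.
    minikube -p <profile> ssh -- sudo systemctl status kubelet
    minikube -p <profile> ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50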
	
	W1216 06:16:05.003329   11368 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:16:05.003329   11368 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:16:05.006336   11368 out.go:203] 
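Per the Suggestion line above, a hedged recovery path is to recreate the cluster with the systemd cgroup driver; the --extra-config value is copied from the log, while <profile> remains a placeholder:

    # Recreate the failed cluster with the cgroup driver the suggestion names.
    minikube -p <profile> delete
    minikube -p <profile> start --driver=docker --extra-config=kubelet.cgroup-driver=systemd

The SystemVerification warning above additionally names the kubelet configuration option FailCgroupV1, which must be set to false for kubelet v1.35 or newer to keep running on a cgroup v1 host.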
	I1216 06:16:03.665900    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 06:16:03.665900    2100 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 06:16:03.667896    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.671893    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.725618    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:16:03.726615    2100 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:16:03.726615    2100 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:16:03.728616    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:16:03.730622    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.778616    2100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:16:03.782622    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:16:03.806619    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.866065    2100 node_ready.go:35] waiting up to 6m0s for node "no-preload-686300" to be "Ready" ...
	I1216 06:16:03.887062    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:16:03.889063    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 06:16:03.889063    2100 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 06:16:03.915062    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 06:16:03.915062    2100 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 06:16:03.974753    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:16:03.984754    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 06:16:03.984754    2100 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 06:16:04.002751    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 06:16:04.002751    2100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 06:16:04.077037    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 06:16:04.077037    2100 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1216 06:16:04.097029    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.098040    2100 retry.go:31] will retry after 327.291867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.102038    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 06:16:04.102038    2100 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 06:16:04.162650    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 06:16:04.162730    2100 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1216 06:16:04.172452    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.172452    2100 retry.go:31] will retry after 162.955986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.190835    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 06:16:04.190835    2100 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 06:16:04.212428    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:16:04.212428    2100 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 06:16:04.242274    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:04.333523    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.333523    2100 retry.go:31] will retry after 306.565091ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.339511    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:04.426748    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.426748    2100 retry.go:31] will retry after 243.308048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.429746    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:04.513792    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.513854    2100 retry.go:31] will retry after 338.54175ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.645290    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:16:04.674409    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:04.731418    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.731418    2100 retry.go:31] will retry after 504.836716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:04.761411    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.761411    2100 retry.go:31] will retry after 362.968297ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.857829    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:04.963423    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.963423    2100 retry.go:31] will retry after 692.98574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.128838    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:05.236152    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.236152    2100 retry.go:31] will retry after 1.059819013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.242380    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:05.336959    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.336959    2100 retry.go:31] will retry after 651.301512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.661242    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:05.772466    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.772466    2100 retry.go:31] will retry after 1.028057258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.992090    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:06.105856    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.105856    2100 retry.go:31] will retry after 1.077072034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.301919    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:06.434927    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.434927    2100 retry.go:31] will retry after 1.819517425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.807395    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:06.909747    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.909747    2100 retry.go:31] will retry after 1.116729418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:07.188680    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:07.304775    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:07.304775    2100 retry.go:31] will retry after 990.350055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.031059    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:08.142079    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.142133    2100 retry.go:31] will retry after 2.44300328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.261149    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:16:08.302604    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:08.363926    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.363926    2100 retry.go:31] will retry after 1.04966539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:08.409917    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.409917    2100 retry.go:31] will retry after 1.423403129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:09.418423    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:09.503734    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:09.503734    2100 retry.go:31] will retry after 3.436079802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:09.838732    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:09.928361    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:09.928361    2100 retry.go:31] will retry after 2.530734224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:10.590016    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:10.672848    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:10.672848    2100 retry.go:31] will retry after 2.162609718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.464950    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:12.556706    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.557281    2100 retry.go:31] will retry after 3.536450628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.840427    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:12.923546    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.923546    2100 retry.go:31] will retry after 3.393774227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.944564    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:13.023675    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:13.023675    2100 retry.go:31] will retry after 2.67208837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	
	
	==> Docker <==
	Dec 16 06:03:47 kubernetes-upgrade-633300 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 16 06:03:47 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:47.949792479Z" level=info msg="Starting up"
	Dec 16 06:03:47 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:47.972059928Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 16 06:03:47 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:47.972401562Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 16 06:03:47 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:47.972474270Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 16 06:03:47 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:47.990111951Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 16 06:03:48 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:48.005475002Z" level=info msg="Loading containers: start."
	Dec 16 06:03:48 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:48.008843443Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 16 06:03:54 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:54.488808979Z" level=info msg="Restoring containers: start."
	Dec 16 06:03:54 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:54.606838101Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 16 06:03:54 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:54.666776755Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 16 06:03:54 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:54.998831896Z" level=info msg="Loading containers: done."
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.029550398Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.029637507Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.029650808Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.029656309Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.029661610Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.029713915Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.029763720Z" level=info msg="Initializing buildkit"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.135254375Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.143544613Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.143729731Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.143760134Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:03:55 kubernetes-upgrade-633300 dockerd[1440]: time="2025-12-16T06:03:55.143854144Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:03:55 kubernetes-upgrade-633300 systemd[1]: Started docker.service - Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000002] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.524882] CPU: 2 PID: 415390 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f861737bb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f861737baf6.
	[  +0.000001] RSP: 002b:00007fff8ec595c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.754677] CPU: 9 PID: 415557 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f3622b31b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f3622b31af6.
	[  +0.000001] RSP: 002b:00007ffdd45883c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:16:18 up  1:52,  0 user,  load average: 3.55, 3.93, 3.93
	Linux kubernetes-upgrade-633300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:16:15 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:16:15 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 334.
	Dec 16 06:16:15 kubernetes-upgrade-633300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:15 kubernetes-upgrade-633300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:15 kubernetes-upgrade-633300 kubelet[26110]: E1216 06:16:15.904345   26110 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:16:15 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:16:15 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:16:16 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 335.
	Dec 16 06:16:16 kubernetes-upgrade-633300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:16 kubernetes-upgrade-633300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:16 kubernetes-upgrade-633300 kubelet[26122]: E1216 06:16:16.675081   26122 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:16:16 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:16:16 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:16:17 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 336.
	Dec 16 06:16:17 kubernetes-upgrade-633300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:17 kubernetes-upgrade-633300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:17 kubernetes-upgrade-633300 kubelet[26150]: E1216 06:16:17.411994   26150 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:16:17 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:16:17 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:16:18 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 337.
	Dec 16 06:16:18 kubernetes-upgrade-633300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:18 kubernetes-upgrade-633300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:16:18 kubernetes-upgrade-633300 kubelet[26254]: E1216 06:16:18.166145   26254 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:16:18 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:16:18 kubernetes-upgrade-633300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
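
The dump above shows two linked failure signatures: every kubectl apply retry fails during client-side validation because kubectl cannot fetch the OpenAPI schema from the apiserver on localhost:8443 (connection refused), and the apiserver never comes up because the kubelet is crash-looping on "kubelet is configured to not run on a host using cgroup v1" (restart counter 334-337); the Docker daemon log above carries the matching cgroup v1 deprecation warning. A minimal diagnostic sketch, assuming shell access to the node via minikube ssh (the profile name is taken from this log; stat -fc %T and the /healthz endpoint are standard tooling, not part of this test suite):

  # Prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1.
  minikube ssh -p kubernetes-upgrade-633300 -- stat -fc %T /sys/fs/cgroup

  # A healthy apiserver answers "ok"; here the connection would be refused,
  # matching the validation errors above.
  minikube ssh -p kubernetes-upgrade-633300 -- curl -ksS https://localhost:8443/healthz

Note that the --validate=false escape hatch suggested in the error text would not help: validation is merely the first step to fail, and the apply itself still needs a reachable apiserver. On a WSL2 host like this one, cgroup v2 can usually be forced by adding kernelCommandLine = cgroup_no_v1=all under the [wsl2] section of %UserProfile%\.wslconfig and restarting WSL (an assumption about the environment, not something this test verifies).
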
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-633300 -n kubernetes-upgrade-633300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-633300 -n kubernetes-upgrade-633300: exit status 2 (602.5998ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-633300" apiserver is not running, skipping kubectl commands (state="Stopped")
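
For context, --format={{.APIServer}} is a Go template over minikube's status struct, so the command prints only the apiserver field ("Stopped" here), and the harness tolerates exit status 2 because minikube status encodes stopped components in its non-zero exit code rather than signaling a harness failure. A hypothetical spot check outside the harness, using sibling fields of the same struct (Host and Kubelet are assumptions in the sense that this log only exercises APIServer):

  # Query individual status fields for the profile.
  out/minikube-windows-amd64.exe status -p kubernetes-upgrade-633300 --format={{.Host}}
  out/minikube-windows-amd64.exe status -p kubernetes-upgrade-633300 --format={{.Kubelet}}
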
helpers_test.go:176: Cleaning up "kubernetes-upgrade-633300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-633300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-633300: (2.9778913s)
--- FAIL: TestKubernetesUpgrade (833.55s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (532.8s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
E1216 06:04:55.710395   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m49.4990063s)
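
For context on the flag under test: with --preload=false, minikube skips the preloaded image tarball it would normally download and load into the container runtime, so every control-plane image for the requested --kubernetes-version has to be pulled from its registry during start; registry reachability is therefore a hard dependency of FirstStart, and a blocked pull path surfaces as the timeout seen here (exit status 109 after 8m49s). A rough way to probe that pull path by hand, sketched under the assumptions that the node is up enough to ssh into and that registry.k8s.io hosts the beta control-plane images:

  # Pull one control-plane image the same way the cluster would.
  minikube ssh -p no-preload-686300 -- sudo crictl pull registry.k8s.io/kube-apiserver:v1.35.0-beta.0
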

-- stdout --
	* [no-preload-686300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "no-preload-686300" primary control-plane node in "no-preload-686300" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	
	

-- /stdout --
** stderr ** 
	I1216 06:04:52.444342    1840 out.go:360] Setting OutFile to fd 1876 ...
	I1216 06:04:52.489329    1840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:04:52.489329    1840 out.go:374] Setting ErrFile to fd 1268...
	I1216 06:04:52.489329    1840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:04:52.504227    1840 out.go:368] Setting JSON to false
	I1216 06:04:52.506480    1840 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6114,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:04:52.506480    1840 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:04:52.510479    1840 out.go:179] * [no-preload-686300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:04:52.515439    1840 notify.go:221] Checking for updates...
	I1216 06:04:52.518311    1840 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:04:52.521336    1840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:04:52.523782    1840 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:04:52.526807    1840 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:04:52.529446    1840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:04:52.532422    1840 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:04:52.533253    1840 config.go:182] Loaded profile config "old-k8s-version-164300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1216 06:04:52.533253    1840 config.go:182] Loaded profile config "running-upgrade-826900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.0
	I1216 06:04:52.533253    1840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:04:52.654450    1840 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:04:52.658621    1840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:04:52.896415    1840 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:04:52.877862768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:04:52.905051    1840 out.go:179] * Using the docker driver based on user configuration
	I1216 06:04:52.911343    1840 start.go:309] selected driver: docker
	I1216 06:04:52.911343    1840 start.go:927] validating driver "docker" against <nil>
	I1216 06:04:52.911343    1840 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:04:52.952078    1840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:04:53.181353    1840 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:04:53.162378913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:04:53.181353    1840 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:04:53.182351    1840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:04:53.184352    1840 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:04:53.187349    1840 cni.go:84] Creating CNI manager for ""
	I1216 06:04:53.187349    1840 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:04:53.187349    1840 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 06:04:53.187349    1840 start.go:353] cluster config:
	{Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
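
The cluster config dumped above is the profile that gets serialized to profiles\no-preload-686300\config.json. As a rough illustration of how such a profile round-trips through JSON, here is a minimal Go sketch; the struct below is a hand-picked subset of the fields visible in the log, not minikube's actual type.

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative subset of the cluster config fields seen in the log above;
// the real minikube type carries many more fields.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MB
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:   "no-preload-686300",
		Driver: "docker",
		Memory: 3072,
		CPUs:   2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.35.0-beta.0",
			ClusterName:       "no-preload-686300",
			ContainerRuntime:  "docker",
			NetworkPlugin:     "cni",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // roughly what lands in profiles\no-preload-686300\config.json
}
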
	I1216 06:04:53.190342    1840 out.go:179] * Starting "no-preload-686300" primary control-plane node in "no-preload-686300" cluster
	I1216 06:04:53.192343    1840 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:04:53.194342    1840 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:04:53.198340    1840 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:04:53.198340    1840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:04:53.198340    1840 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\config.json ...
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1216 06:04:53.198340    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
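
The localpath.go lines above show the one transformation Windows forces on the image cache: ':' is not a legal NTFS file-name character, so the tag separator becomes '_' in the on-disk path. A minimal sketch of that mapping; sanitizeCachePath is an illustrative helper, not minikube's function:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sanitizeCachePath mirrors the transformation visible in the log: the
// image tag separator ':' cannot appear in a Windows file name, so it is
// replaced with '_'. Registry path segments ('/') become directories.
func sanitizeCachePath(cacheDir, imageRef string) string {
	safe := strings.ReplaceAll(imageRef, ":", "_")
	return filepath.Join(cacheDir, filepath.FromSlash(safe))
}

func main() {
	dir := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64`
	// On Windows this prints ...\registry.k8s.io\etcd_3.6.5-0, matching the log.
	fmt.Println(sanitizeCachePath(dir, "registry.k8s.io/etcd:3.6.5-0"))
}
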
	I1216 06:04:53.198340    1840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\config.json: {Name:mke1d682576aa40e39c404f3afa6bdc69c6c84ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:04:53.333380    1840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:04:53.333380    1840 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:04:53.333380    1840 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:04:53.333380    1840 start.go:360] acquireMachinesLock for no-preload-686300: {Name:mk990048edb42dd06e1fb0f2c86d8b2d42a7457e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:53.333946    1840 start.go:364] duration metric: took 566µs to acquireMachinesLock for "no-preload-686300"
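
The machines lock above carries Delay:500ms and Timeout:10m0s, i.e. a poll-until-deadline acquisition. A minimal sketch of that pattern using an O_EXCL lock file; this is an illustrative stand-in, not how minikube's lock package is actually implemented:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, retrying every delay
// until timeout, matching the {Delay:500ms Timeout:10m0s} parameters
// in the log above.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; provisioning machine...")
}
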
	I1216 06:04:53.334953    1840 start.go:93] Provisioning new machine with config: &{Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:04:53.334953    1840 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:04:53.342954    1840 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:04:53.343955    1840 start.go:159] libmachine.API.Create for "no-preload-686300" (driver="docker")
	I1216 06:04:53.343955    1840 client.go:173] LocalClient.Create starting
	I1216 06:04:53.343955    1840 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:04:53.345004    1840 main.go:143] libmachine: Decoding PEM data...
	I1216 06:04:53.345004    1840 main.go:143] libmachine: Parsing certificate...
	I1216 06:04:53.345546    1840 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:04:53.345590    1840 main.go:143] libmachine: Decoding PEM data...
	I1216 06:04:53.345590    1840 main.go:143] libmachine: Parsing certificate...
	I1216 06:04:53.352837    1840 cli_runner.go:164] Run: docker network inspect no-preload-686300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:04:53.573029    1840 cli_runner.go:211] docker network inspect no-preload-686300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:04:53.578040    1840 network_create.go:284] running [docker network inspect no-preload-686300] to gather additional debugging logs...
	I1216 06:04:53.578040    1840 cli_runner.go:164] Run: docker network inspect no-preload-686300
	W1216 06:04:53.643027    1840 cli_runner.go:211] docker network inspect no-preload-686300 returned with exit code 1
	I1216 06:04:53.643027    1840 network_create.go:287] error running [docker network inspect no-preload-686300]: docker network inspect no-preload-686300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-686300 not found
	I1216 06:04:53.643027    1840 network_create.go:289] output of [docker network inspect no-preload-686300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-686300 not found
	
	** /stderr **
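
The exit-code-1 block above is the expected signal that the network does not exist yet; minikube first probes with a templated inspect and then reruns a plain inspect purely to capture debugging output. A minimal sketch of the same existence probe; networkExists is an illustrative helper and assumes a reachable local docker daemon:

package main

import (
	"fmt"
	"os/exec"
)

// networkExists probes for a docker network the same way the log does:
// `docker network inspect <name>` exits non-zero when the network is absent.
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name)
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // exit status 1: "network ... not found"
		}
		return false, err // docker missing, daemon unreachable, etc.
	}
	return true, nil
}

func main() {
	ok, err := networkExists("no-preload-686300")
	if err != nil {
		panic(err)
	}
	fmt.Println("network exists:", ok)
}
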
	I1216 06:04:53.649031    1840 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:04:54.505429    1840 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:04:54.546056    1840 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:04:54.587327    1840 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001817aa0}
	I1216 06:04:54.587327    1840 network_create.go:124] attempt to create docker network no-preload-686300 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1216 06:04:54.592340    1840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-686300 no-preload-686300
	W1216 06:04:55.033875    1840 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-686300 no-preload-686300 returned with exit code 1
	W1216 06:04:55.033875    1840 network_create.go:149] failed to create docker network no-preload-686300 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-686300 no-preload-686300: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:04:55.033875    1840 network_create.go:116] failed to create docker network no-preload-686300 192.168.67.0/24, will retry: subnet is taken
	I1216 06:04:55.062594    1840 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:04:55.083595    1840 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b77500}
	I1216 06:04:55.084589    1840 network_create.go:124] attempt to create docker network no-preload-686300 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 06:04:55.089587    1840 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-686300 no-preload-686300
	I1216 06:04:55.265297    1840 network_create.go:108] docker network no-preload-686300 192.168.76.0/24 created
	I1216 06:04:55.265297    1840 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-686300" container
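
The retry above walks the candidate private /24s (192.168.49.0, 192.168.58.0, 192.168.67.0, then 192.168.76.0 in this run, stepping the third octet by 9) until docker network create stops failing with "Pool overlaps", then pins the node to the first host address after the .1 gateway. A minimal sketch of that candidate walk and the static-IP arithmetic, assuming the step-of-9 pattern seen in this log:

package main

import (
	"fmt"
	"net"
)

// candidateSubnets yields the /24 networks tried in the log:
// 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, 192.168.76.0/24, ...
func candidateSubnets(n int) []*net.IPNet {
	var out []*net.IPNet
	for i := 0; i < n; i++ {
		_, ipnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", 49+9*i))
		out = append(out, ipnet)
	}
	return out
}

// gatewayAndNodeIP returns .1 for the bridge gateway and .2 for the first
// node, matching "calculated static IP 192.168.76.2" in the log above.
func gatewayAndNodeIP(subnet *net.IPNet) (net.IP, net.IP) {
	gw := make(net.IP, len(subnet.IP))
	copy(gw, subnet.IP)
	gw[len(gw)-1] = 1
	node := make(net.IP, len(subnet.IP))
	copy(node, subnet.IP)
	node[len(node)-1] = 2
	return gw, node
}

func main() {
	for _, s := range candidateSubnets(4) {
		gw, node := gatewayAndNodeIP(s)
		fmt.Printf("subnet %s gateway %s node %s\n", s, gw, node)
	}
}
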
	I1216 06:04:55.280594    1840 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:04:55.359460    1840 cli_runner.go:164] Run: docker volume create no-preload-686300 --label name.minikube.sigs.k8s.io=no-preload-686300 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:04:55.437104    1840 oci.go:103] Successfully created a docker volume no-preload-686300
	I1216 06:04:55.443693    1840 cli_runner.go:164] Run: docker run --rm --name no-preload-686300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-686300 --entrypoint /usr/bin/test -v no-preload-686300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
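
The sidecar launched above exists only to run /usr/bin/test -d /var/lib with the fresh named volume mounted at /var: Docker populates an empty named volume from the image's contents at the mount path, so a zero exit both seeds the volume and proves the base image is usable. A minimal sketch issuing the same command (image reference copied verbatim from the log; assumes a local daemon):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	vol := "no-preload-686300"
	// Mounting the empty named volume at /var makes Docker copy the base
	// image's /var contents into it; `test -d /var/lib` then exits 0 only
	// if that copy produced the expected layout.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", vol+":/var",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
		"-d", "/var/lib")
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Println("volume", vol, "prepared")
}
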
	I1216 06:04:56.104544    1840 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.105350    1840 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 06:04:56.106818    1840 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.107486    1840 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 06:04:56.117233    1840 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.118107    1840 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1216 06:04:56.118301    1840 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9199228s
	I1216 06:04:56.118301    1840 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1216 06:04:56.120329    1840 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 06:04:56.123960    1840 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 06:04:56.129967    1840 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.129967    1840 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1216 06:04:56.129967    1840 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9315892s
	I1216 06:04:56.129967    1840 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1216 06:04:56.136961    1840 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.136961    1840 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 06:04:56.147955    1840 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 06:04:56.168964    1840 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.168964    1840 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 06:04:56.179983    1840 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	W1216 06:04:56.186953    1840 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1216 06:04:56.186953    1840 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.187960    1840 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 06:04:56.197966    1840 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 06:04:56.197966    1840 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:04:56.197966    1840 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1216 06:04:56.197966    1840 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.9995873s
	I1216 06:04:56.197966    1840 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	W1216 06:04:56.247154    1840 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:04:56.305788    1840 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:04:56.357575    1840 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:04:56.412444    1840 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1216 06:04:56.613932    1840 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1216 06:04:56.653466    1840 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1216 06:04:56.662859    1840 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1216 06:04:56.680211    1840 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1216 06:04:56.688718    1840 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1216 06:04:57.021300    1840 cli_runner.go:217] Completed: docker run --rm --name no-preload-686300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-686300 --entrypoint /usr/bin/test -v no-preload-686300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.5775345s)
	I1216 06:04:57.021300    1840 oci.go:107] Successfully prepared a docker volume no-preload-686300
	I1216 06:04:57.021937    1840 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:04:57.026596    1840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:04:57.288580    1840 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:04:57.258516189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:04:57.293394    1840 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:04:57.547108    1840 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-686300 --name no-preload-686300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-686300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-686300 --network no-preload-686300 --ip 192.168.76.2 --volume no-preload-686300:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:04:57.895299    1840 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1216 06:04:57.895533    1840 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 4.6971316s
	I1216 06:04:57.895623    1840 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1216 06:04:58.348783    1840 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1216 06:04:58.348783    1840 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 5.1503757s
	I1216 06:04:58.348783    1840 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 06:04:58.365767    1840 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Running}}
	I1216 06:04:58.427779    1840 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:04:58.428763    1840 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1216 06:04:58.428763    1840 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 5.2303553s
	I1216 06:04:58.428763    1840 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 06:04:58.496772    1840 cli_runner.go:164] Run: docker exec no-preload-686300 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:04:58.571780    1840 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1216 06:04:58.571780    1840 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 5.37337s
	I1216 06:04:58.571780    1840 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 06:04:58.605403    1840 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1216 06:04:58.605403    1840 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 5.4069927s
	I1216 06:04:58.605403    1840 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1216 06:04:58.605403    1840 cache.go:87] Successfully saved all images to host disk.
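
Each image in the cache run above follows the same shape: acquire a per-image lock, stat the tarball, and either log "exists ... skipping" or pull anonymously (the credential helper failed earlier, hence the "trying anon" warnings) and save to tar. A minimal sketch of the check-then-fill step; fetchAndSave is a hypothetical stand-in for the pull and export:

package main

import (
	"fmt"
	"os"
)

// ensureCached checks for the image tarball and fills it only on a miss,
// matching the "cache image ... exists" / "save to tar file ... succeeded"
// pairs in the log above.
func ensureCached(imageRef, tarPath string, fetchAndSave func(ref, path string) error) error {
	if _, err := os.Stat(tarPath); err == nil {
		fmt.Printf("cache image %q -> %q exists, skipping\n", imageRef, tarPath)
		return nil
	}
	if err := fetchAndSave(imageRef, tarPath); err != nil {
		return fmt.Errorf("caching %s: %w", imageRef, err)
	}
	fmt.Printf("save to tar file %s -> %s succeeded\n", imageRef, tarPath)
	return nil
}

func main() {
	// Fake fetcher for demonstration; the real one pulls from the registry
	// and writes an image tarball.
	fake := func(ref, path string) error { return os.WriteFile(path, []byte(ref), 0o644) }
	if err := ensureCached("registry.k8s.io/pause:3.10.1", "pause_3.10.1", fake); err != nil {
		panic(err)
	}
}
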
	I1216 06:04:58.616391    1840 oci.go:144] the created container "no-preload-686300" has a running status.
	I1216 06:04:58.616391    1840 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa...
	I1216 06:04:58.691378    1840 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:04:58.769457    1840 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:04:58.828440    1840 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:04:58.828440    1840 kic_runner.go:114] Args: [docker exec --privileged no-preload-686300 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:04:59.040946    1840 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa...
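
The key provisioning above generates an RSA keypair on the host, copies the public half into the container as /home/docker/.ssh/authorized_keys (the 381-byte transfer), and chowns it to the docker user. A minimal sketch of producing an authorized_keys entry, assuming the golang.org/x/crypto/ssh package is available:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA keypair; the private half would live at
	// .minikube\machines\<name>\id_rsa and the public half is shipped
	// into the container as authorized_keys.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	line := ssh.MarshalAuthorizedKey(pub) // "ssh-rsa AAAA...\n"
	if err := os.WriteFile("authorized_keys", line, 0o600); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d-byte authorized_keys entry\n", len(line))
}
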
	I1216 06:05:01.198576    1840 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:05:01.260213    1840 machine.go:94] provisionDockerMachine start ...
	I1216 06:05:01.263230    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:01.336218    1840 main.go:143] libmachine: Using SSH client type: native
	I1216 06:05:01.354241    1840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54238 <nil> <nil>}
	I1216 06:05:01.354241    1840 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:05:01.537360    1840 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-686300
	
	I1216 06:05:01.537360    1840 ubuntu.go:182] provisioning hostname "no-preload-686300"
	I1216 06:05:01.541359    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:01.595358    1840 main.go:143] libmachine: Using SSH client type: native
	I1216 06:05:01.595358    1840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54238 <nil> <nil>}
	I1216 06:05:01.595358    1840 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-686300 && echo "no-preload-686300" | sudo tee /etc/hostname
	I1216 06:05:01.771411    1840 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-686300
	
	I1216 06:05:01.774410    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:01.826411    1840 main.go:143] libmachine: Using SSH client type: native
	I1216 06:05:01.827419    1840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54238 <nil> <nil>}
	I1216 06:05:01.827419    1840 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-686300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-686300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-686300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:05:01.987817    1840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:05:01.987817    1840 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:05:01.987817    1840 ubuntu.go:190] setting up certificates
	I1216 06:05:01.988343    1840 provision.go:84] configureAuth start
	I1216 06:05:01.991973    1840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:05:02.042838    1840 provision.go:143] copyHostCerts
	I1216 06:05:02.042838    1840 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:05:02.042838    1840 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:05:02.042838    1840 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:05:02.043828    1840 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:05:02.043828    1840 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:05:02.043828    1840 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:05:02.044831    1840 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:05:02.044831    1840 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:05:02.044831    1840 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:05:02.045828    1840 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-686300 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-686300]
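
configureAuth then signs a server certificate whose SAN list matches the log line above (loopback, the node IP, and the host names). A minimal sketch of SAN-bearing certificate generation with crypto/x509; for brevity this one is self-signed, whereas minikube signs against ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-686300"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-686300"},
	}
	// Self-signed for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Printf("server.pem (%d bytes):\n%s", len(pemBytes), pemBytes)
}
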
	I1216 06:05:02.230083    1840 provision.go:177] copyRemoteCerts
	I1216 06:05:02.234078    1840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:05:02.236233    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:02.295784    1840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54238 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:05:02.413705    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:05:02.444663    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:05:02.474495    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:05:02.499973    1840 provision.go:87] duration metric: took 511.5132ms to configureAuth
	I1216 06:05:02.499973    1840 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:05:02.500497    1840 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:05:02.503892    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:02.565673    1840 main.go:143] libmachine: Using SSH client type: native
	I1216 06:05:02.566366    1840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54238 <nil> <nil>}
	I1216 06:05:02.566410    1840 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:05:02.729480    1840 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:05:02.729480    1840 ubuntu.go:71] root file system type: overlay
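
Knowing the root filesystem type drives the container-runtime options that follow; inside a kicbase container the probe returns overlay. A minimal sketch running the same probe locally (assumes GNU df; rootFSType is an illustrative helper, and minikube runs this over the SSH runner rather than locally):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType mirrors the probe in the log: the last line of
// `df --output=fstype /` is the filesystem type of /.
func rootFSType() (string, error) {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fs, err := rootFSType()
	if err != nil {
		panic(err)
	}
	fmt.Println("root filesystem:", fs) // "overlay" inside the kicbase container
}
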
	I1216 06:05:02.729480    1840 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:05:02.733028    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:02.787224    1840 main.go:143] libmachine: Using SSH client type: native
	I1216 06:05:02.787441    1840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54238 <nil> <nil>}
	I1216 06:05:02.787441    1840 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:05:02.963318    1840 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:05:02.966917    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:03.022225    1840 main.go:143] libmachine: Using SSH client type: native
	I1216 06:05:03.022803    1840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54238 <nil> <nil>}
	I1216 06:05:03.022803    1840 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:05:04.432964    1840 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:05:02.950515505 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:05:04.432964    1840 machine.go:97] duration metric: took 3.1727104s to provisionDockerMachine
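
The unit update that just completed is an idempotent compare-and-swap: render docker.service.new, diff it against the live unit, and only when they differ move it into place and daemon-reload/enable/restart. A minimal sketch wrapping the same one-liner for local execution; it needs root and systemd and is shown for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Only if the rendered unit differs from the live one do we install it
	// and bounce the service; `diff` exiting 0 short-circuits the || so an
	// unchanged unit causes no restart. Same shape as the log's SSH command.
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// diff's exit 1 is consumed by the ||, so an error here means the
		// install or restart path itself failed.
		panic(err)
	}
}
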
	I1216 06:05:04.432964    1840 client.go:176] duration metric: took 11.0888653s to LocalClient.Create
	I1216 06:05:04.432964    1840 start.go:167] duration metric: took 11.0888653s to libmachine.API.Create "no-preload-686300"
	I1216 06:05:04.432964    1840 start.go:293] postStartSetup for "no-preload-686300" (driver="docker")
	I1216 06:05:04.432964    1840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:05:04.438970    1840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:05:04.441965    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:04.495909    1840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54238 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:05:04.619853    1840 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:05:04.629579    1840 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:05:04.629579    1840 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:05:04.629579    1840 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:05:04.629579    1840 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:05:04.630146    1840 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:05:04.634843    1840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:05:04.648678    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:05:04.680557    1840 start.go:296] duration metric: took 247.5894ms for postStartSetup
	I1216 06:05:04.686976    1840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:05:04.740818    1840 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\config.json ...
	I1216 06:05:04.746416    1840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:05:04.749339    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:04.822051    1840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54238 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:05:04.946057    1840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:05:04.957063    1840 start.go:128] duration metric: took 11.6219587s to createHost
	I1216 06:05:04.957063    1840 start.go:83] releasing machines lock for "no-preload-686300", held for 11.6229659s
	I1216 06:05:04.962059    1840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:05:05.037806    1840 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:05:05.042801    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:05.042801    1840 ssh_runner.go:195] Run: cat /version.json
	I1216 06:05:05.046824    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:05.107793    1840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54238 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:05:05.109792    1840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54238 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	W1216 06:05:05.230797    1840 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
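The status-127 error above comes from invoking the Windows binary name, curl.exe, inside the Linux minikube container, where only curl exists; this failed probe is what later surfaces as the registry.k8s.io connectivity warning. The same reachability check with the Linux binary would be (a sketch, reusing the container name from this log):

    docker exec no-preload-686300 curl -sS -m 2 https://registry.k8s.io/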
	I1216 06:05:05.255811    1840 ssh_runner.go:195] Run: systemctl --version
	I1216 06:05:05.280805    1840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:05:05.288795    1840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:05:05.293805    1840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1216 06:05:05.343802    1840 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:05:05.343802    1840 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:05:05.346806    1840 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
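The find above parks conflicting bridge and podman CNI configs by renaming them with a .mk_disabled suffix instead of deleting them; restoring them is the inverse rename (a sketch):

    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;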
	I1216 06:05:05.346806    1840 start.go:496] detecting cgroup driver to use...
	I1216 06:05:05.346806    1840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:05:05.346806    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:05:05.374821    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:05:05.395816    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:05:05.411798    1840 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:05:05.416806    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:05:05.441805    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:05:05.463800    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:05:05.484800    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:05:05.504806    1840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:05:05.526801    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:05:05.547815    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:05:05.577804    1840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:05:05.599806    1840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:05:05.615801    1840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:05:05.638802    1840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:05:05.820397    1840 ssh_runner.go:195] Run: sudo systemctl restart containerd
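The sed series above pins containerd to the cgroupfs driver and the expected sandbox image before the restart; the edits can be spot-checked with a plain grep (a sketch):

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml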
	I1216 06:05:06.030398    1840 start.go:496] detecting cgroup driver to use...
	I1216 06:05:06.030398    1840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:05:06.035397    1840 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:05:06.060403    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:05:06.084398    1840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:05:06.162396    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:05:06.187399    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:05:06.212401    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:05:06.239401    1840 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:05:06.251405    1840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:05:06.266082    1840 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:05:06.292260    1840 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:05:06.480282    1840 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:05:06.635381    1840 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:05:06.635381    1840 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:05:06.659368    1840 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:05:06.682391    1840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:05:06.846371    1840 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:05:08.593162    1840 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7467675s)
	I1216 06:05:08.598936    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:05:08.621362    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:05:08.645379    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:05:08.671370    1840 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:05:08.830426    1840 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:05:08.977798    1840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:05:09.131871    1840 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:05:09.159032    1840 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:05:09.179959    1840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:05:09.313555    1840 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:05:09.421210    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:05:09.440868    1840 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:05:09.445644    1840 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:05:09.452947    1840 start.go:564] Will wait 60s for crictl version
	I1216 06:05:09.457946    1840 ssh_runner.go:195] Run: which crictl
	I1216 06:05:09.471444    1840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:05:09.512879    1840 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
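This version probe runs crictl against the cri-dockerd socket configured in /etc/crictl.yaml a few lines earlier; pointed at the endpoint explicitly, the equivalent invocation is (a sketch):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version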
	I1216 06:05:09.516858    1840 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:05:09.562383    1840 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:05:09.603875    1840 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 06:05:09.606924    1840 cli_runner.go:164] Run: docker exec -t no-preload-686300 dig +short host.docker.internal
	I1216 06:05:09.751082    1840 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:05:09.755537    1840 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:05:09.765883    1840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
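The hosts rewrite uses a filter-then-append pattern so repeated runs never duplicate the entry: grep -v drops any existing host.minikube.internal line, the fresh mapping is appended, and the result replaces /etc/hosts in a single copy. In isolation, with the values from this log:

    # drop any stale mapping, append the current one, swap the file in
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.65.254\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$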
	I1216 06:05:09.784002    1840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:05:09.837982    1840 kubeadm.go:884] updating cluster {Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:05:09.837982    1840 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:05:09.840994    1840 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:05:09.872791    1840 docker.go:691] Got preloaded images: 
	I1216 06:05:09.872791    1840 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1216 06:05:09.872791    1840 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 06:05:09.887161    1840 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:05:09.891564    1840 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 06:05:09.895573    1840 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1216 06:05:09.897550    1840 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:05:09.899578    1840 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 06:05:09.902570    1840 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 06:05:09.903551    1840 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 06:05:09.906564    1840 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1216 06:05:09.907563    1840 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 06:05:09.909549    1840 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 06:05:09.913554    1840 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1216 06:05:09.914557    1840 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 06:05:09.916564    1840 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 06:05:09.918553    1840 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 06:05:09.921561    1840 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1216 06:05:09.925554    1840 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	W1216 06:05:09.955555    1840 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:05:10.004550    1840 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:05:10.053550    1840 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:05:10.101550    1840 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:05:10.148554    1840 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1216 06:05:10.203001    1840 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1216 06:05:10.225997    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1216 06:05:10.256995    1840 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1216 06:05:10.257997    1840 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1216 06:05:10.257997    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1216 06:05:10.257997    1840 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 06:05:10.262992    1840 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1216 06:05:10.266016    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1216 06:05:10.297003    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1216 06:05:10.297993    1840 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1216 06:05:10.299004    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1216 06:05:10.299004    1840 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1216 06:05:10.302000    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 06:05:10.302996    1840 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1216 06:05:10.309002    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1216 06:05:10.309002    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	W1216 06:05:10.316990    1840 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1216 06:05:10.327005    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 06:05:10.351994    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1216 06:05:10.355989    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1216 06:05:10.371089    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 06:05:10.420559    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1216 06:05:10.420559    1840 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1216 06:05:10.420681    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1216 06:05:10.420725    1840 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 06:05:10.420840    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1216 06:05:10.424752    1840 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1216 06:05:10.425761    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1216 06:05:10.439499    1840 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1216 06:05:10.439499    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1216 06:05:10.439499    1840 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 06:05:10.444511    1840 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1216 06:05:10.476493    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1216 06:05:10.521216    1840 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1216 06:05:10.521216    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1216 06:05:10.529497    1840 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1216 06:05:10.529497    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1216 06:05:10.529497    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1216 06:05:10.529497    1840 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1216 06:05:10.535493    1840 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1216 06:05:10.538491    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 06:05:10.541494    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 06:05:10.560499    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1216 06:05:10.567495    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 06:05:10.631955    1840 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:05:10.769034    1840 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1216 06:05:10.769034    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1216 06:05:10.769034    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1216 06:05:10.769034    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1216 06:05:10.769034    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1216 06:05:10.769034    1840 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1216 06:05:10.769034    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1216 06:05:10.769034    1840 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1216 06:05:10.769034    1840 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 06:05:10.769034    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1216 06:05:10.769034    1840 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 06:05:10.769034    1840 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1216 06:05:10.769034    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1216 06:05:10.769034    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1216 06:05:10.769034    1840 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:05:10.774998    1840 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1216 06:05:10.775997    1840 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:05:10.775997    1840 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1216 06:05:10.776999    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1216 06:05:10.835947    1840 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1216 06:05:10.835947    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1216 06:05:10.847376    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1216 06:05:10.853377    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1216 06:05:10.858377    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1216 06:05:10.858377    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1216 06:05:10.858377    1840 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1216 06:05:10.858377    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1216 06:05:10.864367    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 06:05:10.864367    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 06:05:16.568145    1840 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (5.7320813s)
	I1216 06:05:16.568203    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1216 06:05:16.568242    1840 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1216 06:05:16.568295    1840 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (5.7038533s)
	I1216 06:05:16.568295    1840 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (5.7038533s)
	I1216 06:05:16.568295    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1216 06:05:16.568295    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1216 06:05:16.568295    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1216 06:05:16.568497    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1216 06:05:16.568242    1840 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: (5.7146933s)
	I1216 06:05:16.568629    1840 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1216 06:05:16.568629    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1216 06:05:16.568799    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1216 06:05:19.166253    1840 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (2.5977766s)
	I1216 06:05:19.166253    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1216 06:05:19.166253    1840 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1216 06:05:19.166253    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1216 06:05:21.829178    1840 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (2.6628897s)
	I1216 06:05:21.829178    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1216 06:05:21.829178    1840 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1216 06:05:21.829178    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1216 06:05:23.420686    1840 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.5914869s)
	I1216 06:05:23.420686    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1216 06:05:23.420686    1840 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 06:05:23.420686    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1216 06:05:24.054294    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1216 06:05:24.054294    1840 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1216 06:05:24.054294    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1216 06:05:30.418160    1840 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (6.3637822s)
	I1216 06:05:30.418160    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1216 06:05:30.418160    1840 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1216 06:05:30.418160    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1216 06:05:32.146228    1840 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (1.7272575s)
	I1216 06:05:32.146228    1840 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1216 06:05:32.146228    1840 cache_images.go:125] Successfully loaded all cached images
	I1216 06:05:32.146228    1840 cache_images.go:94] duration metric: took 22.2731456s to LoadCachedImages
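Because no preload tarball exists for v1.35.0-beta.0, every image above takes the same two-step path: scp the cached tarball into /var/lib/minikube/images, then stream it into the daemon with docker load. Reduced to a single image, the load-and-verify step is (a sketch):

    sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load
    docker images --format '{{.Repository}}:{{.Tag}}' | grep pause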
	I1216 06:05:32.146228    1840 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1216 06:05:32.146228    1840 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-686300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:05:32.149226    1840 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:05:32.234488    1840 cni.go:84] Creating CNI manager for ""
	I1216 06:05:32.234488    1840 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:05:32.234488    1840 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:05:32.234488    1840 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-686300 NodeName:no-preload-686300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:05:32.234488    1840 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-686300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:05:32.240434    1840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:05:32.254445    1840 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1216 06:05:32.261291    1840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:05:32.277251    1840 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl
	I1216 06:05:32.277334    1840 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm
	I1216 06:05:32.277334    1840 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet
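Each download URL carries a checksum=file:... fragment, so the fetched binary is verified against the published .sha256 before being cached. Done by hand for one binary, the equivalent is (a sketch):

    VER=v1.35.0-beta.0
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256"
    # two spaces between hash and filename, per sha256sum's check format
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check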
	I1216 06:05:33.428754    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1216 06:05:33.440651    1840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1216 06:05:33.440651    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1216 06:05:33.496898    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:05:33.568297    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1216 06:05:33.612290    1840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1216 06:05:33.612290    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1216 06:05:33.614288    1840 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1216 06:05:33.656286    1840 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1216 06:05:33.656286    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1216 06:05:35.362220    1840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:05:35.374217    1840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 06:05:35.393217    1840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:05:35.414636    1840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
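With the rendered config now on the node, it can be sanity-checked against the kubeadm API types before init ever runs (a sketch; kubeadm config validate is available in recent kubeadm releases):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new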
	I1216 06:05:35.450385    1840 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:05:35.462159    1840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:05:35.482772    1840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:05:35.665259    1840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:05:35.688511    1840 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300 for IP: 192.168.76.2
	I1216 06:05:35.688564    1840 certs.go:195] generating shared ca certs ...
	I1216 06:05:35.688564    1840 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:05:35.689218    1840 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:05:35.689669    1840 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:05:35.689824    1840 certs.go:257] generating profile certs ...
	I1216 06:05:35.690321    1840 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.key
	I1216 06:05:35.690480    1840 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.crt with IP's: []
	I1216 06:05:35.811320    1840 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.crt ...
	I1216 06:05:35.811320    1840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.crt: {Name:mkab73f30f3dcc61c199629ffca9432419031250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:05:35.812331    1840 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.key ...
	I1216 06:05:35.812331    1840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.key: {Name:mk7e98c16323513004a85e5618b48a3c8df50b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:05:35.813328    1840 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key.de5dcef0
	I1216 06:05:35.813328    1840 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt.de5dcef0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1216 06:05:36.030182    1840 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt.de5dcef0 ...
	I1216 06:05:36.030182    1840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt.de5dcef0: {Name:mk8deb171d0a14c0698c7a5e91c1688cf6d8a02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:05:36.031250    1840 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key.de5dcef0 ...
	I1216 06:05:36.031250    1840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key.de5dcef0: {Name:mk5159c7be38db88d298bcda754c1516dfd892fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:05:36.032131    1840 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt.de5dcef0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt
	I1216 06:05:36.048215    1840 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key.de5dcef0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key
	I1216 06:05:36.050217    1840 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key
	I1216 06:05:36.050217    1840 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.crt with IP's: []
	I1216 06:05:36.093227    1840 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.crt ...
	I1216 06:05:36.093227    1840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.crt: {Name:mkabf2cdc3bd616f4e298d950888ea6eb95c0fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:05:36.094247    1840 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key ...
	I1216 06:05:36.094247    1840 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key: {Name:mk00acf75ed3c4bbd124c243668919a9309c681f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
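The profile certs generated above are ordinary x509 pairs signed by the shared minikubeCA key. An equivalent client-cert issuance by hand with openssl would look like this (a sketch; subject fields and file names are illustrative):

    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj '/O=system:masters/CN=minikube-user' -out client.csr
    # sign the request with the cluster CA, as minikube does with minikubeCA
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt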
	I1216 06:05:36.109788    1840 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:05:36.110788    1840 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:05:36.110788    1840 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:05:36.110788    1840 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:05:36.110788    1840 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:05:36.110788    1840 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:05:36.111785    1840 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:05:36.112788    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:05:36.138792    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:05:36.162789    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:05:36.195573    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:05:36.236764    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:05:36.266385    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:05:36.293389    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:05:36.321514    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:05:36.350524    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:05:36.379810    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:05:36.409333    1840 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:05:36.443615    1840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:05:36.477352    1840 ssh_runner.go:195] Run: openssl version
	I1216 06:05:36.495455    1840 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:05:36.515433    1840 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:05:36.531436    1840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:05:36.539441    1840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:05:36.543435    1840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:05:36.596157    1840 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:05:36.618359    1840 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:05:36.634530    1840 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:05:36.660857    1840 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:05:36.685113    1840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:05:36.692680    1840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:05:36.697689    1840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:05:36.755686    1840 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:05:36.771684    1840 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
	I1216 06:05:36.786694    1840 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:05:36.804497    1840 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:05:36.826160    1840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:05:36.836977    1840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:05:36.843094    1840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:05:36.904313    1840 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:05:36.924719    1840 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
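The "<hash>.0" symlinks above follow OpenSSL's subject-hash naming scheme, which is what lets TLS tooling that scans /etc/ssl/certs locate a CA. A minimal sketch of the derivation implied by the lines above, reusing the minikubeCA paths from the log (b5213941 is the hash the openssl call printed):

	# compute the subject hash, then create the <hash>.0 link the scanner expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0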
	I1216 06:05:36.941584    1840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:05:36.949344    1840 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
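The failed stat is the expected outcome here: per the certs.go:400 message, minikube treats a missing apiserver-kubelet-client.crt as a signal that this is likely a first start rather than a restart. The probe reduces to a plain existence check:

	# same check as above; exit status 1 (no such file) selects the first-start path
	stat /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  || echo "no client cert yet; likely first start"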
	I1216 06:05:36.949492    1840 kubeadm.go:401] StartCluster: {Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:05:36.953489    1840 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:05:36.984418    1840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:05:37.002085    1840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:05:37.014124    1840 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:05:37.018666    1840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:05:37.032818    1840 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:05:37.032818    1840 kubeadm.go:158] found existing configuration files:
	
	I1216 06:05:37.037355    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:05:37.052847    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:05:37.056703    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:05:37.076412    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:05:37.089087    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:05:37.094136    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:05:37.114791    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:05:37.127024    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:05:37.131563    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:05:37.150392    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:05:37.161537    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:05:37.165534    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:05:37.181529    1840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:05:37.294337    1840 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:05:37.372961    1840 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:05:37.479716    1840 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:09:39.387359    1840 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:09:39.387359    1840 kubeadm.go:319] 
	I1216 06:09:39.387959    1840 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:09:39.392405    1840 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:09:39.392405    1840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:09:39.392405    1840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:09:39.392405    1840 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:09:39.392405    1840 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:09:39.393407    1840 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:09:39.394411    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:09:39.394411    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:09:39.394411    1840 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:09:39.394411    1840 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:09:39.394411    1840 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:09:39.394411    1840 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:09:39.394411    1840 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:09:39.395410    1840 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:09:39.396403    1840 kubeadm.go:319] OS: Linux
	I1216 06:09:39.396403    1840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:09:39.396403    1840 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:09:39.396403    1840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:09:39.396403    1840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:09:39.396403    1840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:09:39.396403    1840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:09:39.396403    1840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:09:39.397414    1840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:09:39.397414    1840 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:09:39.397414    1840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:09:39.397414    1840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:09:39.397414    1840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:09:39.398428    1840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:09:39.402403    1840 out.go:252]   - Generating certificates and keys ...
	I1216 06:09:39.402403    1840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:09:39.402403    1840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:09:39.402403    1840 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:09:39.403412    1840 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:09:39.403412    1840 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:09:39.403412    1840 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:09:39.403412    1840 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:09:39.403412    1840 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-686300] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 06:09:39.403412    1840 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:09:39.403412    1840 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-686300] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1216 06:09:39.404403    1840 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:09:39.404403    1840 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:09:39.404403    1840 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:09:39.404403    1840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:09:39.404403    1840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:09:39.404403    1840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:09:39.404403    1840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:09:39.405410    1840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:09:39.405410    1840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:09:39.405410    1840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:09:39.405410    1840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:09:39.408406    1840 out.go:252]   - Booting up control plane ...
	I1216 06:09:39.408406    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:09:39.408406    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:09:39.409410    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:09:39.409410    1840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:09:39.409410    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:09:39.409410    1840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:09:39.409410    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:09:39.410410    1840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:09:39.410410    1840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:09:39.410410    1840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:09:39.410410    1840 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000380357s
	I1216 06:09:39.410410    1840 kubeadm.go:319] 
	I1216 06:09:39.410410    1840 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:09:39.410410    1840 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:09:39.411410    1840 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:09:39.411410    1840 kubeadm.go:319] 
	I1216 06:09:39.411410    1840 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:09:39.411410    1840 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:09:39.411410    1840 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:09:39.411410    1840 kubeadm.go:319] 
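Before the automatic retry below, this failure can be triaged by hand with the exact probe kubeadm was polling (quoted verbatim in the error above) plus the two commands it recommends, all run inside the node (e.g. via minikube ssh):

	curl -sSL http://127.0.0.1:10248/healthz   # kubeadm's kubelet health probe
	systemctl status kubelet                   # is the unit running at all?
	journalctl -xeu kubelet                    # and if not, why it is failing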
	W1216 06:09:39.411410    1840 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-686300] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-686300] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000380357s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:09:39.416403    1840 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:09:39.878513    1840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:09:39.896223    1840 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:09:39.900582    1840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:09:39.914756    1840 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:09:39.914814    1840 kubeadm.go:158] found existing configuration files:
	
	I1216 06:09:39.919378    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:09:39.935596    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:09:39.939569    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:09:39.956625    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:09:39.967932    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:09:39.972246    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:09:39.988213    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:09:40.001909    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:09:40.006052    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:09:40.023802    1840 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:09:40.036595    1840 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:09:40.041595    1840 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
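The grep-then-rm sequence above, repeated for admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, amounts to the following sweep (file names and grep target taken from the log; any kubeconfig not pointing at the expected control-plane endpoint is removed):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done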
	I1216 06:09:40.072304    1840 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:09:40.207535    1840 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:09:40.294652    1840 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:09:40.390588    1840 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:13:41.144775    1840 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:41.144775    1840 kubeadm.go:319] 
	I1216 06:13:41.144775    1840 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:41.148846    1840 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:41.149531    1840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:41.149956    1840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:41.150211    1840 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:41.150759    1840 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:41.150889    1840 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:41.151079    1840 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:41.151275    1840 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:41.151526    1840 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:41.151790    1840 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:41.153311    1840 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:41.153615    1840 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:41.153787    1840 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:41.154024    1840 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] OS: Linux
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:41.154727    1840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:41.155306    1840 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:41.156052    1840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:41.158898    1840 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:13:41.159722    1840 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:13:41.159918    1840 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:13:41.160046    1840 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:13:41.160705    1840 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:13:41.160782    1840 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:13:41.160887    1840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:41.161622    1840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:41.161622    1840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:41.164114    1840 out.go:252]   - Booting up control plane ...
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:41.166093    1840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000506958s
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 
	I1216 06:13:41.167095    1840 kubeadm.go:403] duration metric: took 8m4.2111844s to StartCluster
	I1216 06:13:41.167095    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:13:41.170749    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:13:41.232071    1840 cri.go:89] found id: ""
	I1216 06:13:41.232103    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.232153    1840 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:13:41.232153    1840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:13:41.237864    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:13:41.286666    1840 cri.go:89] found id: ""
	I1216 06:13:41.286666    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.286666    1840 logs.go:284] No container was found matching "etcd"
	I1216 06:13:41.286666    1840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:13:41.291424    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:13:41.333354    1840 cri.go:89] found id: ""
	I1216 06:13:41.333354    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.333354    1840 logs.go:284] No container was found matching "coredns"
	I1216 06:13:41.333354    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:13:41.337361    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:13:41.379362    1840 cri.go:89] found id: ""
	I1216 06:13:41.379362    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.379362    1840 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:13:41.379362    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:13:41.383354    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:13:41.434935    1840 cri.go:89] found id: ""
	I1216 06:13:41.434935    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.434935    1840 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:13:41.434935    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:13:41.438925    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:13:41.481929    1840 cri.go:89] found id: ""
	I1216 06:13:41.481929    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.481929    1840 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:13:41.481929    1840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:13:41.485920    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:13:41.530524    1840 cri.go:89] found id: ""
	I1216 06:13:41.530614    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.530614    1840 logs.go:284] No container was found matching "kindnet"
	I1216 06:13:41.530666    1840 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:13:41.530666    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:13:41.626225    1840 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:13:41.626225    1840 logs.go:123] Gathering logs for Docker ...
	I1216 06:13:41.626225    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:13:41.658338    1840 logs.go:123] Gathering logs for container status ...
	I1216 06:13:41.658338    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:13:41.703328    1840 logs.go:123] Gathering logs for kubelet ...
	I1216 06:13:41.703328    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:13:41.762322    1840 logs.go:123] Gathering logs for dmesg ...
	I1216 06:13:41.762322    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
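The gather steps above are the post-mortem bundle minikube collects when a start fails; the same data can be pulled manually inside the node with the commands the log already shows:

	sudo crictl ps -a                                                         # container status
	sudo journalctl -u kubelet -n 400                                         # kubelet logs
	sudo journalctl -u docker -u cri-docker -n 400                            # runtime logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings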
	W1216 06:13:41.799388    1840 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.799388    1840 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init output identical to the first block above]
	
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.801787    1840 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:13:41.811220    1840 out.go:203] 
	W1216 06:13:41.815157    1840 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init output identical to the first block above]
	
	W1216 06:13:41.815157    1840 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:13:41.815157    1840 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:13:41.817851    1840 out.go:203] 

                                                
                                                
** /stderr **
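The failure captured above is kubeadm's wait-control-plane phase timing out on the kubelet's local health endpoint; the log itself names the probe URL and the two systemd commands to triage with. A minimal triage sketch, assuming the no-preload-686300 node container is still running (the --no-pager flags and the tail are additions for non-interactive use):

	# Probe the endpoint kubeadm was waiting on (URL taken from the log above).
	out/minikube-windows-amd64.exe -p no-preload-686300 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	# The two triage commands the kubeadm output suggests, run inside the node.
	out/minikube-windows-amd64.exe -p no-preload-686300 ssh -- sudo systemctl status kubelet --no-pager
	out/minikube-windows-amd64.exe -p no-preload-686300 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Capture full logs for a bug report, as the boxed advice above recommends.
	out/minikube-windows-amd64.exe -p no-preload-686300 logs --file=logs.txt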
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
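Both warnings in the log point at the cgroup setup: the SystemVerification warning notes that kubelet v1.35 rejects cgroups v1 unless the KubeletConfiguration option 'FailCgroupV1' is explicitly set to 'false', and the final suggestion recommends retrying with --extra-config=kubelet.cgroup-driver=systemd. A hedged retry sketch along those lines; whether systemd is the right driver here is an open question, since the Docker Desktop daemon in this run reports CgroupDriver:cgroupfs:

	# Retry with the cgroup-driver override the log suggests; every other flag
	# is copied from the failing invocation above.
	out/minikube-windows-amd64.exe start -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd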
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-686300
helpers_test.go:244: (dbg) docker inspect no-preload-686300:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc",
	        "Created": "2025-12-16T06:04:57.603416998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:04:57.945459203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hosts",
	        "LogPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc-json.log",
	        "Name": "/no-preload-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-686300",
	                "Source": "/var/lib/docker/volumes/no-preload-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-686300",
	                "name.minikube.sigs.k8s.io": "no-preload-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9eaf22c59ece58cc41ccdd6b1ffbec9338fd4c996e850e9f23f89cd055f1d4e3",
	            "SandboxKey": "/var/run/docker/netns/9eaf22c59ece",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54238"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54239"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54240"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c88c2c552749da4ec31002e488ccc5e8184c9ce7ba360033e35a2aa2c5aead9",
	                    "EndpointID": "c09b65cdfb104f0ebd3eca48e5283746dc009186edbfa5d2e23372c6159c69c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-686300",
	                        "f98924d46fbc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
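The inspect dump above is worth keeping as a record, but for quick post-mortems a Go template narrows it to the fields that matter. A small sketch using docker's --format flag, with field names taken from the dump:

	# Container state and restart count.
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' no-preload-686300
	# Host port bindings (the 22/tcp mapping is the SSH port minikube drives).
	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-686300
	# Resource limits handed to the kic container (3 GiB memory, 2 CPUs above).
	docker inspect -f 'mem={{.HostConfig.Memory}} nanocpus={{.HostConfig.NanoCpus}}' no-preload-686300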
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 6 (603.2548ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 06:13:42.914614    9640 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
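The exit status 6 is the kubeconfig mismatch the warning describes: the "no-preload-686300" entry does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig, so status can reach the Docker host but not the cluster. A sketch of the fix the output itself recommends, with a verification step added as an assumption:

	# Rewrite the kubeconfig entry for this profile, then confirm the context exists.
	out/minikube-windows-amd64.exe -p no-preload-686300 update-context
	kubectl config get-contexts no-preload-686300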
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25: (1.1433412s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo cri-dockerd --version                                                                      │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo systemctl status containerd --all --full --no-pager                                        │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo systemctl cat containerd --no-pager                                                        │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo cat /etc/containerd/config.toml                                                            │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo containerd config dump                                                                     │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo systemctl status crio --all --full --no-pager                                              │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │                     │
	│ ssh     │ -p auto-030800 sudo systemctl cat crio --no-pager                                                              │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo crio config                                                                                │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ delete  │ -p auto-030800                                                                                                 │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ start   │ -p kindnet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 pgrep -a kubelet                                                                             │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/nsswitch.conf                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/hosts                                                                          │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/resolv.conf                                                                    │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crictl pods                                                                             │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crictl ps --all                                                                         │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo ip a s                                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo ip r s                                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo iptables-save                                                                           │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo iptables -t nat -L -n -v                                                                │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:11:49
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:11:49.340795    6788 out.go:360] Setting OutFile to fd 1712 ...
	I1216 06:11:49.386344    6788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:11:49.386344    6788 out.go:374] Setting ErrFile to fd 1196...
	I1216 06:11:49.386390    6788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:11:49.401091    6788 out.go:368] Setting JSON to false
	I1216 06:11:49.404855    6788 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6531,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:11:49.405055    6788 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:11:49.408997    6788 out.go:179] * [kindnet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:11:49.412763    6788 notify.go:221] Checking for updates...
	I1216 06:11:49.414957    6788 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:11:49.416858    6788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:11:49.419397    6788 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:11:49.421529    6788 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:11:49.423543    6788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:11:49.426393    6788 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.427388    6788 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.427640    6788 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.428138    6788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:11:49.549056    6788 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:11:49.552567    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:11:49.779179    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:11:49.756494835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:11:49.782904    6788 out.go:179] * Using the docker driver based on user configuration
	I1216 06:11:49.786690    6788 start.go:309] selected driver: docker
	I1216 06:11:49.786719    6788 start.go:927] validating driver "docker" against <nil>
	I1216 06:11:49.786755    6788 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:11:49.871381    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:11:50.104061    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:11:50.077311907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:11:50.105056    6788 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:11:50.105056    6788 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:11:50.108056    6788 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:11:50.110058    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:11:50.110058    6788 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:11:50.110058    6788 start.go:353] cluster config:
	{Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:11:50.112053    6788 out.go:179] * Starting "kindnet-030800" primary control-plane node in "kindnet-030800" cluster
	I1216 06:11:50.115067    6788 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:11:50.118075    6788 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:11:50.120078    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:11:50.120078    6788 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:11:50.120078    6788 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:11:50.120078    6788 cache.go:65] Caching tarball of preloaded images
	I1216 06:11:50.120078    6788 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:11:50.121072    6788 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:11:50.121072    6788 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json ...
	I1216 06:11:50.121072    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json: {Name:mkebea825fd6dc6adf01534f5a4bb9848abba58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:11:50.198067    6788 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:11:50.198067    6788 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:11:50.198067    6788 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:11:50.198067    6788 start.go:360] acquireMachinesLock for kindnet-030800: {Name:mk13b4d023e9ef7970ce337d36b9fc70162bc2d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:11:50.198067    6788 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-030800"
	I1216 06:11:50.198067    6788 start.go:93] Provisioning new machine with config: &{Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:11:50.199067    6788 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:11:50.202064    6788 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:11:50.202064    6788 start.go:159] libmachine.API.Create for "kindnet-030800" (driver="docker")
	I1216 06:11:50.202064    6788 client.go:173] LocalClient.Create starting
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Decoding PEM data...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Parsing certificate...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Decoding PEM data...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Parsing certificate...
	I1216 06:11:50.208057    6788 cli_runner.go:164] Run: docker network inspect kindnet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:11:50.256055    6788 cli_runner.go:211] docker network inspect kindnet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:11:50.259055    6788 network_create.go:284] running [docker network inspect kindnet-030800] to gather additional debugging logs...
	I1216 06:11:50.259055    6788 cli_runner.go:164] Run: docker network inspect kindnet-030800
	W1216 06:11:50.314050    6788 cli_runner.go:211] docker network inspect kindnet-030800 returned with exit code 1
	I1216 06:11:50.314050    6788 network_create.go:287] error running [docker network inspect kindnet-030800]: docker network inspect kindnet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-030800 not found
	I1216 06:11:50.314050    6788 network_create.go:289] output of [docker network inspect kindnet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-030800 not found
	
	** /stderr **
	I1216 06:11:50.318205    6788 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:11:50.407244    6788 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.423243    6788 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.439260    6788 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.454418    6788 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.470404    6788 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.485782    6788 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.499864    6788 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001585680}
	I1216 06:11:50.499864    6788 network_create.go:124] attempt to create docker network kindnet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:11:50.504590    6788 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-030800 kindnet-030800
	I1216 06:11:50.647049    6788 network_create.go:108] docker network kindnet-030800 192.168.103.0/24 created
	I1216 06:11:50.647049    6788 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-030800" container
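The subnet probe above walks candidate /24 ranges by stepping the third octet in increments of 9 (49, 58, 67, 76, 85, 94, 103) and takes the first range no existing docker network holds. A minimal Go sketch of that scan, using the reserved subnets from this run as a stand-in for live `docker network inspect` results (the hard-coded stand-in set is the only assumption):

package main

import "fmt"

// Subnets this run found reserved; minikube learns these by inspecting
// existing docker networks, stubbed here so the sketch runs standalone.
var reserved = map[string]bool{
	"192.168.49.0/24": true,
	"192.168.58.0/24": true,
	"192.168.67.0/24": true,
	"192.168.76.0/24": true,
	"192.168.85.0/24": true,
	"192.168.94.0/24": true,
}

func main() {
	// Step the third octet by 9 per attempt, matching the
	// 49 -> 58 -> 67 -> 76 -> 85 -> 94 -> 103 sequence in the log above.
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[subnet] {
			fmt.Println("skipping reserved subnet", subnet)
			continue
		}
		fmt.Println("using free private subnet", subnet) // prints 192.168.103.0/24
		return
	}
}

With a free range in hand, the `docker network create --driver=bridge --subnet=... --gateway=...` call above pins the node to the .2 address of that range.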
	I1216 06:11:50.655126    6788 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:11:50.718220    6788 cli_runner.go:164] Run: docker volume create kindnet-030800 --label name.minikube.sigs.k8s.io=kindnet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:11:50.775893    6788 oci.go:103] Successfully created a docker volume kindnet-030800
	I1216 06:11:50.779320    6788 cli_runner.go:164] Run: docker run --rm --name kindnet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --entrypoint /usr/bin/test -v kindnet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:11:52.174069    6788 cli_runner.go:217] Completed: docker run --rm --name kindnet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --entrypoint /usr/bin/test -v kindnet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3947303s)
	I1216 06:11:52.174069    6788 oci.go:107] Successfully prepared a docker volume kindnet-030800
	I1216 06:11:52.174069    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:11:52.174069    6788 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:11:52.177694    6788 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 06:12:02.114874   11368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:12:02.115036   11368 kubeadm.go:319] 
	I1216 06:12:02.115323   11368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:12:02.119332   11368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:12:02.119332   11368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:12:02.120135   11368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:12:02.120135   11368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:12:02.120135   11368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:12:02.120871   11368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:12:02.121013   11368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:12:02.121192   11368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:12:02.122017   11368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:12:02.122194   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:12:02.122408   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:12:02.122510   11368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:12:02.122753   11368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:12:02.122840   11368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:12:02.123033   11368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:12:02.123163   11368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:12:02.123310   11368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:12:02.123421   11368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:12:02.123572   11368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:12:02.123980   11368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:12:02.124094   11368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] OS: Linux
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:12:02.124933   11368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:12:02.125112   11368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:12:02.125304   11368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:12:02.125449   11368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:12:02.125567   11368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:12:02.125730   11368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:12:02.126387   11368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:12:02.126558   11368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:12:02.407594   11368 out.go:252]   - Generating certificates and keys ...
	I1216 06:12:02.407968   11368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:12:02.408113   11368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:12:02.408288   11368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:12:02.408453   11368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:12:02.408673   11368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:12:02.408815   11368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:12:02.408921   11368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:12:02.409054   11368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:12:02.409210   11368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:12:02.409444   11368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:12:02.409514   11368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:12:02.409673   11368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:12:02.409749   11368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:12:02.409903   11368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:12:02.410062   11368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:12:02.410138   11368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:12:02.410298   11368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:12:02.410526   11368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:12:02.410600   11368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:12:02.453808   11368 out.go:252]   - Booting up control plane ...
	I1216 06:12:02.454792   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:12:02.455026   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:12:02.455098   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:12:02.455292   11368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:12:02.455588   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:12:02.455804   11368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:12:02.455984   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:12:02.456047   11368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:12:02.456475   11368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:12:02.456689   11368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:12:02.456759   11368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000829212s
	I1216 06:12:02.456833   11368 kubeadm.go:319] 
	I1216 06:12:02.456918   11368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:12:02.457018   11368 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:12:02.457186   11368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:12:02.457264   11368 kubeadm.go:319] 
	I1216 06:12:02.457466   11368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:12:02.457538   11368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:12:02.457617   11368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:12:02.457681   11368 kubeadm.go:319] 
	W1216 06:12:02.457840   11368 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829212s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
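The failure above comes down to a single probe: kubeadm's wait-control-plane phase polls the kubelet's local health endpoint until a deadline. A small Go sketch of that check (the equivalent of `curl -sSL http://127.0.0.1:10248/healthz`), assuming the same 4m0s budget the log reports; kubeadm actually polls repeatedly rather than holding one request open, but when the kubelet never starts answering the failure mode is the same "context deadline exceeded":

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 4m0s mirrors "[kubelet-check] ... This can take up to 4m0s" above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://127.0.0.1:10248/healthz", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// With no kubelet listening, this reports the deadline error seen above.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}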
	
	I1216 06:12:02.460957   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:12:02.923334   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:12:02.942284   11368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:12:02.947934   11368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:12:02.960033   11368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:12:02.960033   11368 kubeadm.go:158] found existing configuration files:
	
	I1216 06:12:02.963699   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:12:02.976249   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:12:02.980398   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:12:02.996745   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:12:03.010587   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:12:03.014857   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:12:03.033804   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.047258   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:12:03.052529   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.071112   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:12:03.084411   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:12:03.089634   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
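The four grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it still points at https://control-plane.minikube.internal:8443, and removed otherwise so the retried init starts clean. A rough Go equivalent of that sweep (an illustrative sketch to run on the node, not minikube's actual code path):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "https://control-plane.minikube.internal:8443"
	for _, path := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), want) {
			// Missing or pointing elsewhere: drop it, mirroring the
			// grep -> rm -f pairs in the log (rm -f tolerates absence).
			fmt.Printf("%s may not contain %s - removing\n", path, want)
			os.Remove(path)
		}
	}
}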
	I1216 06:12:03.107865   11368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:12:03.217980   11368 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:12:03.304403   11368 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:12:03.402507   11368 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:12:07.002051    6788 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.8240354s)
	I1216 06:12:07.002137    6788 kic.go:203] duration metric: took 14.8278391s to extract preloaded images to volume ...
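The 14.8s step just completed is the preload trick: a throwaway container bind-mounts the lz4 tarball read-only alongside the machine's named volume and untars into it, so /var arrives pre-populated with images before the node container ever starts. A sketch of the same invocation through os/exec, with the paths and names taken from this run (requires a local docker daemon):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mirrors the `docker run --rm --entrypoint /usr/bin/tar ...` line above.
	tarball := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4`
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "kindnet-030800:/extractDir", // named volume that becomes the node's /var
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}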
	I1216 06:12:07.005779    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:12:07.230944    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:12:07.212321642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:12:07.234947    6788 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:12:07.472678    6788 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-030800 --name kindnet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-030800 --network kindnet-030800 --ip 192.168.103.2 --volume kindnet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:12:08.105890    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Running}}
	I1216 06:12:08.171938    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:08.232928    6788 cli_runner.go:164] Run: docker exec kindnet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:12:08.343285    6788 oci.go:144] the created container "kindnet-030800" has a running status.
	I1216 06:12:08.343285    6788 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa...
	I1216 06:12:08.510838    6788 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:12:08.587450    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:08.650452    6788 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:12:08.650452    6788 kic_runner.go:114] Args: [docker exec --privileged kindnet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:12:08.809196    6788 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa...
	I1216 06:12:10.890772    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:10.954521    6788 machine.go:94] provisionDockerMachine start ...
	I1216 06:12:10.957521    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.008520    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.023115    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.023115    6788 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:12:11.199297    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-030800
	
	I1216 06:12:11.199297    6788 ubuntu.go:182] provisioning hostname "kindnet-030800"
	I1216 06:12:11.202294    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.259757    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.259806    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.259806    6788 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-030800 && echo "kindnet-030800" | sudo tee /etc/hostname
	I1216 06:12:11.458451    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-030800
	
	I1216 06:12:11.461723    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.518816    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.519151    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.519151    6788 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:12:11.682075    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:12:11.682075    6788 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:12:11.682075    6788 ubuntu.go:190] setting up certificates
	I1216 06:12:11.682075    6788 provision.go:84] configureAuth start
	I1216 06:12:11.685801    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:11.740638    6788 provision.go:143] copyHostCerts
	I1216 06:12:11.741639    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:12:11.741639    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:12:11.741639    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:12:11.742643    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:12:11.742643    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:12:11.742643    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:12:11.743641    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:12:11.743641    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:12:11.743641    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:12:11.744645    6788 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-030800 san=[127.0.0.1 192.168.103.2 kindnet-030800 localhost minikube]
	I1216 06:12:11.931347    6788 provision.go:177] copyRemoteCerts
	I1216 06:12:11.935348    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:12:11.939351    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.996758    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:12.128806    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:12:12.157528    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:12:12.184855    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:12:12.209875    6788 provision.go:87] duration metric: took 527.7927ms to configureAuth
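configureAuth above minted a server certificate with san=[127.0.0.1 192.168.103.2 kindnet-030800 localhost minikube] and copied it to /etc/docker inside the node. A small illustrative Go check (not part of minikube) that parses the generated server.pem on the host and prints those SANs back:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the "generating server cert" line in this run.
	data, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem`)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in server.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // kindnet-030800 localhost minikube
	fmt.Println("IP SANs: ", cert.IPAddresses) // 127.0.0.1 192.168.103.2
}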
	I1216 06:12:12.209875    6788 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:12:12.209875    6788 config.go:182] Loaded profile config "kindnet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:12:12.214435    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.270503    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.270548    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.270548    6788 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:12:12.443739    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:12:12.443821    6788 ubuntu.go:71] root file system type: overlay
	I1216 06:12:12.443969    6788 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:12:12.447696    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.505748    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.505780    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.505780    6788 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:12:12.696827    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
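
The empty ExecStart= directly above is the standard systemd override idiom: for anything but Type=oneshot, ExecStart= settings accumulate, so a unit that inherits one must first clear it before setting its own. A minimal sketch of the same idiom as a drop-in, with an illustrative override path and a trimmed dockerd command line (not the flags from this run):

	sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf >/dev/null <<'EOF'
	[Service]
	# Clear the ExecStart inherited from the base unit; without this, systemd
	# refuses to start: "more than one ExecStart= setting".
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
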
	I1216 06:12:12.700867    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.760030    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.760715    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.760715    6788 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:12:14.220671    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:12:12.685444205 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
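
The diff output above is not just logging: in the command at 06:12:12.760, diff -u exits non-zero when the rendered unit differs from the installed one, and only then does the || branch move the new file into place and restart Docker; matching files leave the service untouched. The same idempotent pattern as a standalone sketch (the function name is made up for illustration):

	# Install a rendered unit file and restart its service only when the content changed.
	replace_unit_if_changed() {
	  local new="$1" dst="$2" svc="$3"
	  sudo diff -u "$dst" "$new" >/dev/null || {
	    sudo mv "$new" "$dst"
	    sudo systemctl -f daemon-reload &&
	      sudo systemctl -f enable "$svc" &&
	      sudo systemctl -f restart "$svc"
	  }
	}
	replace_unit_if_changed /lib/systemd/system/docker.service.new \
	                        /lib/systemd/system/docker.service docker
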
	I1216 06:12:14.220671    6788 machine.go:97] duration metric: took 3.2661054s to provisionDockerMachine
	I1216 06:12:14.220671    6788 client.go:176] duration metric: took 24.0182853s to LocalClient.Create
	I1216 06:12:14.220671    6788 start.go:167] duration metric: took 24.0182853s to libmachine.API.Create "kindnet-030800"
	I1216 06:12:14.220671    6788 start.go:293] postStartSetup for "kindnet-030800" (driver="docker")
	I1216 06:12:14.220671    6788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:12:14.225965    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:12:14.228654    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.286730    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.422175    6788 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:12:14.430679    6788 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:12:14.430679    6788 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:12:14.430679    6788 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:12:14.430679    6788 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:12:14.431304    6788 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:12:14.436062    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:12:14.447557    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:12:14.476598    6788 start.go:296] duration metric: took 255.9237ms for postStartSetup
	I1216 06:12:14.481857    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:14.534874    6788 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json ...
	I1216 06:12:14.540932    6788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:12:14.544163    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.599153    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.738099    6788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:12:14.756903    6788 start.go:128] duration metric: took 24.5575075s to createHost
	I1216 06:12:14.756964    6788 start.go:83] releasing machines lock for "kindnet-030800", held for 24.5585685s
	I1216 06:12:14.761089    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:14.820995    6788 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:12:14.825383    6788 ssh_runner.go:195] Run: cat /version.json
	I1216 06:12:14.825455    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.828473    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.882924    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.883920    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:15.008272    6788 ssh_runner.go:195] Run: systemctl --version
	W1216 06:12:15.008961    6788 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:12:15.024976    6788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:12:15.035099    6788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:12:15.039160    6788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:12:15.088926    6788 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:12:15.089002    6788 start.go:496] detecting cgroup driver to use...
	I1216 06:12:15.089002    6788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:12:15.089195    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1216 06:12:15.115148    6788 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:12:15.115148    6788 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:12:15.116205    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:12:15.133999    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:12:15.148544    6788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:12:15.153402    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:12:15.173763    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:12:15.193174    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:12:15.211967    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:12:15.230814    6788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:12:15.248897    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:12:15.268590    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:12:15.286801    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
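
The sed series from 06:12:15.116 to 06:12:15.286 rewrites /etc/containerd/config.toml in place: it pins the sandbox image, forces the cgroupfs driver, normalizes the runc runtime to v2, points the CNI conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. A hypothetical spot-check of the result; the commented lines show the values those expressions are written to produce, not a dump from this run:

	grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true
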
	I1216 06:12:15.305083    6788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:12:15.323613    6788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:12:15.340787    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:15.499010    6788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 06:12:15.663518    6788 start.go:496] detecting cgroup driver to use...
	I1216 06:12:15.663548    6788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:12:15.670359    6788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:12:15.699486    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:12:15.720065    6788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:12:15.794660    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:12:15.815487    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:12:15.833957    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:12:15.857975    6788 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:12:15.872465    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:12:15.883658    6788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:12:15.905854    6788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:12:16.059572    6788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:12:16.183220    6788 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:12:16.183220    6788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:12:16.206253    6788 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:12:16.226683    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:16.363066    6788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:12:17.209602    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:12:17.234418    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:12:17.256030    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:12:17.281172    6788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:12:17.429442    6788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:12:17.579817    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:17.730956    6788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:12:17.755884    6788 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:12:17.777180    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:17.927172    6788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:12:18.030003    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
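
The systemctl churn from 06:12:15.905 through 06:12:18.030 is the bring-up for Docker plus the socket-activated cri-dockerd shim. Grouped for readability, the same commands amount to:

	sudo systemctl unmask docker.service
	sudo systemctl enable docker.socket
	sudo systemctl reset-failed docker
	sudo systemctl daemon-reload && sudo systemctl restart docker
	sudo systemctl unmask cri-docker.socket
	sudo systemctl enable cri-docker.socket
	sudo systemctl daemon-reload && sudo systemctl restart cri-docker.socket
	sudo systemctl reset-failed cri-docker.service
	sudo systemctl daemon-reload && sudo systemctl restart cri-docker.service
	sudo systemctl is-active --quiet cri-docker.service   # final gate before waiting on the socket
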
	I1216 06:12:18.048766    6788 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:12:18.055532    6788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:12:18.064014    6788 start.go:564] Will wait 60s for crictl version
	I1216 06:12:18.069369    6788 ssh_runner.go:195] Run: which crictl
	I1216 06:12:18.080342    6788 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:12:18.125849    6788 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:12:18.129056    6788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:12:18.171478    6788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:12:18.208246    6788 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:12:18.212058    6788 cli_runner.go:164] Run: docker exec -t kindnet-030800 dig +short host.docker.internal
	I1216 06:12:18.346525    6788 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:12:18.351179    6788 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:12:18.360150    6788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
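
The one-liner above keeps /etc/hosts idempotent: drop any line already ending in the tab-separated hostname, append the fresh mapping, then copy the temp file back over /etc/hosts. As a reusable sketch (the function name is illustrative; the hostname is matched as a regex, which is close enough for these fixed names):

	# Add or refresh an "<ip>\t<name>" entry in /etc/hosts without duplicating it.
	set_hosts_entry() {
	  local ip="$1" name="$2" tmp="/tmp/h.$$"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
	  sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
	}
	set_hosts_entry 192.168.65.254 host.minikube.internal
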
	I1216 06:12:18.377467    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:18.431980    6788 kubeadm.go:884] updating cluster {Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:12:18.432155    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:12:18.435467    6788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:12:18.470599    6788 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:12:18.470599    6788 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:12:18.474251    6788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:12:18.502607    6788 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:12:18.502607    6788 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:12:18.502607    6788 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:12:18.502607    6788 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 06:12:18.506388    6788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:12:18.578689    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:12:18.578689    6788 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:12:18.578689    6788 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-030800 NodeName:kindnet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:12:18.579341    6788 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
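
The rendered config above is staged rather than applied directly: scp'd to kubeadm.yaml.new (06:12:18.658 below), promoted to kubeadm.yaml at 06:12:20.000, and consumed by kubeadm init at 06:12:20.174. In outline, with the preflight ignore list abbreviated (the real invocation passes a much longer one):

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,SystemVerification
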
	I1216 06:12:18.585628    6788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:12:18.597522    6788 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:12:18.601494    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:12:18.615009    6788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1216 06:12:18.637536    6788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:12:18.658037    6788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 06:12:18.688118    6788 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:12:18.695892    6788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:12:18.714307    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:18.850314    6788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:12:18.871857    6788 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800 for IP: 192.168.103.2
	I1216 06:12:18.871857    6788 certs.go:195] generating shared ca certs ...
	I1216 06:12:18.871857    6788 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.872460    6788 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:12:18.872580    6788 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:12:18.872580    6788 certs.go:257] generating profile certs ...
	I1216 06:12:18.873200    6788 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key
	I1216 06:12:18.873250    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt with IP's: []
	I1216 06:12:18.949253    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt ...
	I1216 06:12:18.949253    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt: {Name:mkf410fba892917bdd522929abe867e46494e3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.950237    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key ...
	I1216 06:12:18.950237    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key: {Name:mkf29080c46ee2c14c10a21eb67c9cc815f21e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.951309    6788 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf
	I1216 06:12:18.951403    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:12:19.114614    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf ...
	I1216 06:12:19.114614    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf: {Name:mkb55c42e33a2ae7870887e58b6e05f71dd4daf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.115619    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf ...
	I1216 06:12:19.115619    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf: {Name:mk19d54f554eb9aa8025289f18eb07425aa3fc90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.116906    6788 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt
	I1216 06:12:19.131178    6788 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key
	I1216 06:12:19.132179    6788 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key
	I1216 06:12:19.132179    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt with IP's: []
	I1216 06:12:19.184770    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt ...
	I1216 06:12:19.184770    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt: {Name:mkde61e113e82c5dc4f7e40e38dd7355210b095d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.185771    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key ...
	I1216 06:12:19.185771    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key: {Name:mk44855c783f1633070400559fd3d672d6875e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.200773    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:12:19.200773    6788 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:12:19.201509    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:12:19.201643    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:12:19.201822    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:12:19.201993    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:12:19.202166    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:12:19.202444    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:12:19.237351    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:12:19.262028    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:12:19.287983    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:12:19.314234    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:12:19.339105    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:12:19.364652    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:12:19.396531    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:12:19.427432    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:12:19.459712    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:12:19.482706    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:12:19.510753    6788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:12:19.533021    6788 ssh_runner.go:195] Run: openssl version
	I1216 06:12:19.551437    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.569271    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:12:19.590136    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.598267    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.602512    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.651072    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:12:19.666426    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:12:19.681980    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.696016    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:12:19.714282    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.721158    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.725233    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.774540    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:12:19.793803    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:12:19.810823    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.827895    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:12:19.844802    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.853541    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.857849    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.905009    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:12:19.921560    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
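
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each trusted cert must be reachable in /etc/ssl/certs through a symlink named <subject-hash>.0, and openssl x509 -hash -noout prints exactly that hash (b5213941, 51391683 and 3ec20f2e in this run). For a single cert the pattern is:

	# Link a CA cert under its subject hash so OpenSSL's directory lookup finds it.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"
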
	I1216 06:12:19.939199    6788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:12:19.947504    6788 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:12:19.947719    6788 kubeadm.go:401] StartCluster: {Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:12:19.950360    6788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:12:19.983797    6788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:12:20.000670    6788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:12:20.014790    6788 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:12:20.018800    6788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:12:20.032572    6788 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:12:20.032616    6788 kubeadm.go:158] found existing configuration files:
	
	I1216 06:12:20.036680    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:12:20.049905    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:12:20.054058    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:12:20.071603    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:12:20.085088    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:12:20.089085    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:12:20.106513    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:12:20.118805    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:12:20.122049    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:12:20.142303    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:12:20.154293    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:12:20.158297    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
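
The four grep/rm pairs above are one check unrolled per kubeconfig: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is treated as stale and removed before kubeadm init (here all four are simply absent on first start). Condensed:

	endpoint='https://control-plane.minikube.internal:8443'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
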
	I1216 06:12:20.174303    6788 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:12:20.296404    6788 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:12:20.301548    6788 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:12:20.397661    6788 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:12:33.968529    6788 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:12:33.968529    6788 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:12:33.968529    6788 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:12:33.969389    6788 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:12:33.969607    6788 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:12:33.969607    6788 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:12:33.972873    6788 out.go:252]   - Generating certificates and keys ...
	I1216 06:12:33.972873    6788 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:12:33.972873    6788 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:12:33.973434    6788 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:12:33.975209    6788 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:12:33.975828    6788 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:12:33.975933    6788 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:12:33.975933    6788 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:12:33.975933    6788 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:12:33.980556    6788 out.go:252]   - Booting up control plane ...
	I1216 06:12:33.980556    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:12:33.981078    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:12:33.981825    6788 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:12:33.981911    6788 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:12:33.981911    6788 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:12:33.981911    6788 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:12:33.982502    6788 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:12:33.982549    6788 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.041935ms
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.898426957s
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.853187439s
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502086413s
	I1216 06:12:33.983821    6788 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:12:33.983995    6788 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:12:33.983995    6788 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:12:33.984608    6788 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:12:33.984704    6788 kubeadm.go:319] [bootstrap-token] Using token: xj3a70.p80jdqi9w7ogff39
	I1216 06:12:33.994781    6788 out.go:252]   - Configuring RBAC rules ...
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:12:33.995784    6788 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:12:33.995784    6788 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:12:33.995784    6788 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:12:33.995784    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:12:33.996786    6788 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:12:33.996786    6788 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:12:33.996786    6788 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:12:33.997793    6788 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:12:33.997793    6788 kubeadm.go:319] 
	I1216 06:12:33.997912    6788 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:12:33.997912    6788 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:12:33.997912    6788 kubeadm.go:319] 
	I1216 06:12:33.997912    6788 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xj3a70.p80jdqi9w7ogff39 \
	I1216 06:12:33.998463    6788 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:12:33.998463    6788 kubeadm.go:319] 	--control-plane 
	I1216 06:12:33.998463    6788 kubeadm.go:319] 
	I1216 06:12:33.998463    6788 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:12:33.998463    6788 kubeadm.go:319] 
	I1216 06:12:33.998463    6788 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xj3a70.p80jdqi9w7ogff39 \
	I1216 06:12:33.999035    6788 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
	I1216 06:12:33.999035    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:12:34.001665    6788 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 06:12:34.007658    6788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 06:12:34.019612    6788 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 06:12:34.019612    6788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 06:12:34.041663    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 06:12:34.320470    6788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:12:34.325898    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:34.325972    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-030800 minikube.k8s.io/updated_at=2025_12_16T06_12_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kindnet-030800 minikube.k8s.io/primary=true
	I1216 06:12:34.337113    6788 ops.go:34] apiserver oom_adj: -16
	I1216 06:12:34.446144    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:34.947933    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:35.448308    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:35.947898    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:36.447700    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:36.946927    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:37.445777    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:37.947107    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:38.447683    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:38.538781    6788 kubeadm.go:1114] duration metric: took 4.2182542s to wait for elevateKubeSystemPrivileges
	I1216 06:12:38.538869    6788 kubeadm.go:403] duration metric: took 18.5909004s to StartCluster
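The run of repeated `kubectl get sa default` commands above is a ~500ms polling loop: elevating kube-system privileges is only safe once the default service account exists. A sketch of the same wait using client-go (clientset construction omitted; the function name is illustrative, not minikube's):

    // wait until the "default" ServiceAccount exists, polling every 500ms
    package wait

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitForDefaultSA(ctx context.Context, client kubernetes.Interface) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    		if err == nil {
    			return nil // service account exists; the RBAC binding can proceed
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }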
	I1216 06:12:38.538924    6788 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:38.538924    6788 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:12:38.540348    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:38.541592    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:12:38.541592    6788 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:12:38.541543    6788 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:12:38.541748    6788 addons.go:70] Setting storage-provisioner=true in profile "kindnet-030800"
	I1216 06:12:38.541780    6788 addons.go:239] Setting addon storage-provisioner=true in "kindnet-030800"
	I1216 06:12:38.541927    6788 host.go:66] Checking if "kindnet-030800" exists ...
	I1216 06:12:38.541927    6788 addons.go:70] Setting default-storageclass=true in profile "kindnet-030800"
	I1216 06:12:38.541927    6788 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-030800"
	I1216 06:12:38.541927    6788 config.go:182] Loaded profile config "kindnet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:12:38.544488    6788 out.go:179] * Verifying Kubernetes components...
	I1216 06:12:38.550892    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.550892    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.553045    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:38.611835    6788 addons.go:239] Setting addon default-storageclass=true in "kindnet-030800"
	I1216 06:12:38.611835    6788 host.go:66] Checking if "kindnet-030800" exists ...
	I1216 06:12:38.612829    6788 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:12:38.615828    6788 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:12:38.615828    6788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:12:38.618830    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.619830    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:38.670833    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:38.671832    6788 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:12:38.671832    6788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:12:38.674835    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:38.728830    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:38.786244    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:12:38.993052    6788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:12:39.294182    6788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:12:39.393642    6788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:12:39.901700    6788 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.1154417s)
	I1216 06:12:39.901700    6788 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
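The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block before the existing `forward . /etc/resolv.conf` directive, so host.minikube.internal resolves to the Windows host at 192.168.65.254, and a `log` directive before `errors`. Reconstructed from that sed script, the affected part of the Corefile ends up looking roughly like this (other directives omitted):

        log
        errors
        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The `fallthrough` keeps every other name flowing on to the normal forwarder.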
	I1216 06:12:40.331433    6788 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.0372375s)
	I1216 06:12:40.331433    6788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3383629s)
	I1216 06:12:40.335284    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:40.389545    6788 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:12:40.393549    6788 addons.go:530] duration metric: took 1.8519322s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:12:40.400560    6788 node_ready.go:35] waiting up to 15m0s for node "kindnet-030800" to be "Ready" ...
	I1216 06:12:40.413561    6788 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-030800" context rescaled to 1 replicas
	W1216 06:12:42.406617    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:44.907499    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:47.406803    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:49.908547    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:52.408158    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:54.907731    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:56.908056    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:59.407002    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:13:01.407755    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	I1216 06:13:03.406403    6788 node_ready.go:49] node "kindnet-030800" is "Ready"
	I1216 06:13:03.406463    6788 node_ready.go:38] duration metric: took 23.0055942s for node "kindnet-030800" to be "Ready" ...
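The node_ready wait above re-reads the node object every couple of seconds until its Ready condition reports True (about 23s total in this run). The check itself amounts to this client-go sketch:

    // report whether a node's Ready condition is True
    package wait

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func nodeIsReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
    	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil // no Ready condition reported yet
    }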
	I1216 06:13:03.406495    6788 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:13:03.411466    6788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:13:03.430701    6788 api_server.go:72] duration metric: took 24.8886193s to wait for apiserver process to appear ...
	I1216 06:13:03.430701    6788 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:13:03.430701    6788 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54866/healthz ...
	I1216 06:13:03.440994    6788 api_server.go:279] https://127.0.0.1:54866/healthz returned 200:
	ok
	I1216 06:13:03.443640    6788 api_server.go:141] control plane version: v1.34.2
	I1216 06:13:03.443640    6788 api_server.go:131] duration metric: took 12.9387ms to wait for apiserver health ...
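The healthz wait above reaches the apiserver through Docker's forwarded host port (127.0.0.1:54866 in this run) and treats the control plane as healthy once /healthz returns 200 with body "ok". A minimal probe in Go; skipping TLS verification is an assumption of this sketch for brevity, and a real check should trust the cluster CA instead:

    // probe an apiserver /healthz endpoint over a forwarded local port
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    	}}
    	resp, err := client.Get("https://127.0.0.1:54866/healthz") // port from this run
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy cluster prints "200: ok"
    }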
	I1216 06:13:03.443640    6788 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:13:03.449411    6788 system_pods.go:59] 8 kube-system pods found
	I1216 06:13:03.449411    6788 system_pods.go:61] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.449411    6788 system_pods.go:61] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.449411    6788 system_pods.go:74] duration metric: took 5.7708ms to wait for pod list to return data ...
	I1216 06:13:03.449411    6788 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:13:03.454158    6788 default_sa.go:45] found service account: "default"
	I1216 06:13:03.454158    6788 default_sa.go:55] duration metric: took 4.7472ms for default service account to be created ...
	I1216 06:13:03.454158    6788 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:13:03.462563    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.462563    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.462563    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.462563    6788 retry.go:31] will retry after 200.474088ms: missing components: kube-dns
	I1216 06:13:03.671143    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.671143    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.671143    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.671143    6788 retry.go:31] will retry after 243.807956ms: missing components: kube-dns
	I1216 06:13:03.922250    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.922250    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.922250    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.922250    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.922250    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.922374    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.922374    6788 retry.go:31] will retry after 406.562398ms: missing components: kube-dns
	I1216 06:13:04.338229    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:04.338229    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:04.338229    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:04.338820    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:04.338820    6788 retry.go:31] will retry after 404.864087ms: missing components: kube-dns
	I1216 06:13:04.751475    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:04.751475    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:04.751475    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:04.751475    6788 retry.go:31] will retry after 580.937637ms: missing components: kube-dns
	I1216 06:13:05.340705    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:05.340705    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Running
	I1216 06:13:05.340705    6788 system_pods.go:126] duration metric: took 1.8865217s to wait for k8s-apps to be running ...
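The intervals in the retry.go lines above (200ms, 243ms, 406ms, 404ms, 580ms) are not a fixed period: each wait is drawn from a growing base with random jitter so parallel waiters do not poll in lockstep. A sketch of that shape; the starting interval and growth factor here are assumptions, not minikube's actual constants:

    // jittered, growing retry delays, shaped like the retry.go intervals above
    package wait

    import (
    	"math/rand"
    	"time"
    )

    func backoffDelays(n int) []time.Duration {
    	base := 200 * time.Millisecond // assumed starting interval
    	delays := make([]time.Duration, 0, n)
    	for i := 0; i < n; i++ {
    		jitter := time.Duration(rand.Int63n(int64(base) / 2)) // up to +50% jitter
    		delays = append(delays, base+jitter)
    		base = base * 3 / 2 // assumed ~1.5x growth per attempt
    	}
    	return delays
    }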
	I1216 06:13:05.340705    6788 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:13:05.345162    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:13:05.363995    6788 system_svc.go:56] duration metric: took 23.2385ms WaitForService to wait for kubelet
	I1216 06:13:05.364042    6788 kubeadm.go:587] duration metric: took 26.8218872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:13:05.364042    6788 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:13:05.368328    6788 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:13:05.368328    6788 node_conditions.go:123] node cpu capacity is 16
	I1216 06:13:05.368328    6788 node_conditions.go:105] duration metric: took 4.2856ms to run NodePressure ...
	I1216 06:13:05.368328    6788 start.go:242] waiting for startup goroutines ...
	I1216 06:13:05.368328    6788 start.go:247] waiting for cluster config update ...
	I1216 06:13:05.368328    6788 start.go:256] writing updated cluster config ...
	I1216 06:13:05.373800    6788 ssh_runner.go:195] Run: rm -f paused
	I1216 06:13:05.381487    6788 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:13:05.388287    6788 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2klg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.395940    6788 pod_ready.go:94] pod "coredns-66bc5c9577-2klg5" is "Ready"
	I1216 06:13:05.395940    6788 pod_ready.go:86] duration metric: took 7.6527ms for pod "coredns-66bc5c9577-2klg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.402352    6788 pod_ready.go:83] waiting for pod "etcd-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.409558    6788 pod_ready.go:94] pod "etcd-kindnet-030800" is "Ready"
	I1216 06:13:05.409558    6788 pod_ready.go:86] duration metric: took 7.2054ms for pod "etcd-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.413805    6788 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.423218    6788 pod_ready.go:94] pod "kube-apiserver-kindnet-030800" is "Ready"
	I1216 06:13:05.423218    6788 pod_ready.go:86] duration metric: took 9.4134ms for pod "kube-apiserver-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.426944    6788 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.790782    6788 pod_ready.go:94] pod "kube-controller-manager-kindnet-030800" is "Ready"
	I1216 06:13:05.790782    6788 pod_ready.go:86] duration metric: took 363.8334ms for pod "kube-controller-manager-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.989561    6788 pod_ready.go:83] waiting for pod "kube-proxy-w78wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.398538    6788 pod_ready.go:94] pod "kube-proxy-w78wd" is "Ready"
	I1216 06:13:06.398538    6788 pod_ready.go:86] duration metric: took 408.972ms for pod "kube-proxy-w78wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.590868    6788 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.989680    6788 pod_ready.go:94] pod "kube-scheduler-kindnet-030800" is "Ready"
	I1216 06:13:06.989680    6788 pod_ready.go:86] duration metric: took 398.2881ms for pod "kube-scheduler-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.989680    6788 pod_ready.go:40] duration metric: took 1.6081714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
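The extra pod_ready wait above fans out over one label selector per control-plane component and requires each matching kube-system pod to be Ready or gone. Listing those pods reduces to a loop like this (a sketch; the function name is illustrative):

    // list kube-system pods by the component labels named in the log above
    package wait

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    var controlPlaneSelectors = []string{
    	"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    	"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    }

    func listControlPlanePods(ctx context.Context, client kubernetes.Interface) error {
    	for _, sel := range controlPlaneSelectors {
    		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			return err
    		}
    		for _, p := range pods.Items {
    			fmt.Println(sel, "->", p.Name) // each pod is then checked for Ready
    		}
    	}
    	return nil
    }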
	I1216 06:13:07.082864    6788 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:13:07.089654    6788 out.go:179] * Done! kubectl is now configured to use "kindnet-030800" cluster and "default" namespace by default
	I1216 06:13:29.437822    7444 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:29.437822    7444 kubeadm.go:319] 
	I1216 06:13:29.438345    7444 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:29.442203    7444 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:29.442288    7444 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:29.442391    7444 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:29.442422    7444 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:29.442532    7444 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:29.442639    7444 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:29.442697    7444 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:29.443354    7444 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:29.443491    7444 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:29.444615    7444 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:29.445371    7444 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:29.445501    7444 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:29.445583    7444 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:29.445630    7444 kubeadm.go:319] OS: Linux
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:29.446464    7444 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:29.447176    7444 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:29.451165    7444 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:29.453414    7444 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:29.453588    7444 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:29.453727    7444 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:29.457212    7444 out.go:252]   - Booting up control plane ...
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:29.457981    7444 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:29.458269    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:29.458458    7444 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:29.459071    7444 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:29.459187    7444 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.0010934s
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459234    7444 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459809    7444 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 
	W1216 06:13:29.459809    7444 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.0010934s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
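The failure mode in this second run (pid 7444, building the newest-cni-256200 profile per the certificate SANs above) is kubeadm's wait-control-plane phase giving up: the kubelet's local health endpoint at http://127.0.0.1:10248/healthz never answered within 4m0s. The probe kubeadm performs is essentially:

    // probe the kubelet healthz endpoint with a deadline; a timeout here
    // surfaces as the "context deadline exceeded" error in the log above
    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://127.0.0.1:10248/healthz", nil)
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		fmt.Println("kubelet not healthy:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("kubelet healthz:", resp.Status)
    }

When this never turns healthy, the `systemctl status kubelet` / `journalctl -xeu kubelet` commands suggested in the log are the right next step on the node.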
	
	I1216 06:13:29.463847    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:13:29.953578    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:13:29.979536    7444 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:13:29.985016    7444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:13:29.996493    7444 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:13:29.996493    7444 kubeadm.go:158] found existing configuration files:
	
	I1216 06:13:30.000490    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:13:30.012501    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:13:30.016488    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:13:30.031492    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:13:30.042509    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:13:30.046490    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:13:30.066672    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.081178    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:13:30.085494    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.103106    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:13:30.115159    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:13:30.119152    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
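Before retrying `kubeadm init`, minikube clears stale kubeconfigs: each of the four conf files is kept only if it already mentions https://control-plane.minikube.internal:8443 and is removed otherwise. Here grep exits with status 2 because the files do not exist at all, so the rm calls are no-ops. The check-and-remove logic, as a Go sketch:

    // keep a kubeconfig only if it points at the expected control-plane endpoint
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			os.Remove(conf) // missing or pointing elsewhere: clear before kubeadm init
    			fmt.Println("removed (or already absent):", conf)
    		}
    	}
    }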
	I1216 06:13:30.134150    7444 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:13:30.260471    7444 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:13:30.351419    7444 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:13:30.450039    7444 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:13:41.144775    1840 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:41.144775    1840 kubeadm.go:319] 
	I1216 06:13:41.144775    1840 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:41.148846    1840 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:41.149531    1840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:41.149956    1840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:41.150211    1840 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:41.150759    1840 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:41.150889    1840 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:41.151079    1840 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:41.151275    1840 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:41.151526    1840 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:41.151790    1840 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:41.153311    1840 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:41.153615    1840 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:41.153787    1840 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:41.154024    1840 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] OS: Linux
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:41.154727    1840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:41.155306    1840 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:41.156052    1840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:41.158898    1840 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:13:41.159722    1840 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:13:41.159918    1840 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:13:41.160046    1840 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:13:41.160705    1840 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:13:41.160782    1840 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:13:41.160887    1840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:41.161622    1840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:41.161622    1840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:41.164114    1840 out.go:252]   - Booting up control plane ...
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:41.166093    1840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000506958s
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 
	I1216 06:13:41.167095    1840 kubeadm.go:403] duration metric: took 8m4.2111844s to StartCluster
	I1216 06:13:41.167095    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:13:41.170749    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:13:41.232071    1840 cri.go:89] found id: ""
	I1216 06:13:41.232103    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.232153    1840 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:13:41.232153    1840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:13:41.237864    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:13:41.286666    1840 cri.go:89] found id: ""
	I1216 06:13:41.286666    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.286666    1840 logs.go:284] No container was found matching "etcd"
	I1216 06:13:41.286666    1840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:13:41.291424    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:13:41.333354    1840 cri.go:89] found id: ""
	I1216 06:13:41.333354    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.333354    1840 logs.go:284] No container was found matching "coredns"
	I1216 06:13:41.333354    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:13:41.337361    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:13:41.379362    1840 cri.go:89] found id: ""
	I1216 06:13:41.379362    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.379362    1840 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:13:41.379362    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:13:41.383354    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:13:41.434935    1840 cri.go:89] found id: ""
	I1216 06:13:41.434935    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.434935    1840 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:13:41.434935    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:13:41.438925    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:13:41.481929    1840 cri.go:89] found id: ""
	I1216 06:13:41.481929    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.481929    1840 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:13:41.481929    1840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:13:41.485920    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:13:41.530524    1840 cri.go:89] found id: ""
	I1216 06:13:41.530614    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.530614    1840 logs.go:284] No container was found matching "kindnet"
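With the control plane never having come up, minikube sweeps the CRI for each control-plane component by name; every empty result above confirms the corresponding container never started, after which it falls back to gathering kubelet, Docker, dmesg, and container-status logs below. The sweep is equivalent to:

    // ask crictl for each control-plane component; empty output means it never started
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Println(name, "lookup failed:", err)
    			continue
    		}
    		if strings.TrimSpace(string(out)) == "" {
    			fmt.Println("no container was found matching", name)
    		}
    	}
    }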
	I1216 06:13:41.530666    1840 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:13:41.530666    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:13:41.626225    1840 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
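	Every kubectl call above dies with "dial tcp [::1]:8443: connect: connection refused", i.e. nothing is listening on the apiserver port at all. A hedged way to confirm that directly from inside the node, assuming curl is present in the node image (the kubeadm output below quotes curl being used on the node):

	# "connection refused" here means the apiserver is simply absent, not unhealthy
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- sudo curl -sk https://localhost:8443/healthz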
	I1216 06:13:41.626225    1840 logs.go:123] Gathering logs for Docker ...
	I1216 06:13:41.626225    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:13:41.658338    1840 logs.go:123] Gathering logs for container status ...
	I1216 06:13:41.658338    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:13:41.703328    1840 logs.go:123] Gathering logs for kubelet ...
	I1216 06:13:41.703328    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:13:41.762322    1840 logs.go:123] Gathering logs for dmesg ...
	I1216 06:13:41.762322    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
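	The "Gathering logs" passes above are the same bundle `minikube logs` collects; the advice box further down in this output suggests capturing it to a file. A sketch using this run's binary and profile:

	out/minikube-windows-amd64.exe logs -p no-preload-686300 --file=logs.txt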
	W1216 06:13:41.799388    1840 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
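	The two troubleshooting commands kubeadm prints above can be run over `minikube ssh`, as can the exact health probe it timed out on (the curl form is quoted in the error itself); a sketch:

	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- systemctl status kubelet --no-pager
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- sudo journalctl -xeu kubelet --no-pager
	# kubeadm's own readiness probe; a healthy kubelet answers "ok"
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- curl -sSL http://127.0.0.1:10248/healthz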
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.799388    1840 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.801787    1840 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:13:41.811220    1840 out.go:203] 
	W1216 06:13:41.815157    1840 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:13:41.815157    1840 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:13:41.815157    1840 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:13:41.817851    1840 out.go:203] 
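	The suggestion above is minikube's canned advice for kubelet startup failures; note, though, that the kubelet journal later in this log (the "==> kubelet <==" section) shows the kubelet rejecting its own configuration because the host runs cgroup v1, so the cgroup-driver flag may not be the real blocker on this WSL2 host. A hedged sketch of checking the cgroup version and then retrying with the suggested flag:

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1, the failing case here
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- "stat -fc %T /sys/fs/cgroup"
	# the retry this log itself suggests
	out/minikube-windows-amd64.exe start -p no-preload-686300 --extra-config=kubelet.cgroup-driver=systemd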
	
	
	==> Docker <==
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402735317Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402828927Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402844429Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402852530Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402861131Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402891834Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402934238Z" level=info msg="Initializing buildkit"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.580612363Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.589812059Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590000679Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590040684Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590028382Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:13:43.963072   11133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:43.964147   11133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:43.965019   11133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:43.967330   11133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:43.969426   11133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.809571] CPU: 0 PID: 390218 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8788dabb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8788dabaf6.
	[  +0.000001] RSP: 002b:00007ffd609e6e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.827622] CPU: 14 PID: 390383 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fddca31bb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fddca31baf6.
	[  +0.000001] RSP: 002b:00007ffcdf5a88f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.540385] tmpfs: Unknown parameter 'noswap'
	[  +9.462694] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:13:44 up  1:50,  0 user,  load average: 2.53, 4.03, 3.95
	Linux no-preload-686300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:13:40 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:41 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 06:13:41 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:41 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:41 no-preload-686300 kubelet[10855]: E1216 06:13:41.194311   10855 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:41 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:41 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:41 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 06:13:41 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:41 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:41 no-preload-686300 kubelet[10986]: E1216 06:13:41.950215   10986 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:41 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:41 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:42 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 06:13:42 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:42 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:42 no-preload-686300 kubelet[10999]: E1216 06:13:42.696395   10999 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:42 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:42 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:43 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 16 06:13:43 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:43 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:43 no-preload-686300 kubelet[11027]: E1216 06:13:43.445619   11027 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:43 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:43 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
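	The restart loop above (counter 320 through 323) shows the actual failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host. That matches the [WARNING SystemVerification] text in the kubeadm output, which names the escape hatch: set the KubeletConfiguration option 'FailCgroupV1' to 'false' and explicitly skip the validation (this run already passes SystemVerification in --ignore-preflight-errors, so only the kubelet side is missing). A hedged sketch of the raw config edit; failCgroupV1 is the YAML spelling of that option, and whether the change survives minikube rewriting /var/lib/kubelet/config.yaml on the next start is an assumption:

	# append the option to the kubelet config this log shows being written
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml"
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- sudo systemctl restart kubelet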
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 6 (575.1954ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 06:13:45.122926    7188 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (532.80s)
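The exit-status-6 output above names its own fix: the profile's endpoint never made it into the kubeconfig, so the status helper cannot resolve it. A sketch of the suggested repair and re-check, hedged in that `update-context` can only succeed once the profile actually exists:

	out/minikube-windows-amd64.exe update-context -p no-preload-686300
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300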

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (520.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-256200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
E1216 06:09:03.996307   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:04.003686   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:04.015695   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:04.037580   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:04.079849   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:04.161734   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:04.324537   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:04.645935   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:05.288291   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:06.570408   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:09.132911   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-256200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m36.5632855s)

                                                
                                                
-- stdout --
	* [newest-cni-256200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "newest-cni-256200" primary control-plane node in "newest-cni-256200" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 06:08:55.473293    7444 out.go:360] Setting OutFile to fd 1196 ...
	I1216 06:08:55.521543    7444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:08:55.521543    7444 out.go:374] Setting ErrFile to fd 1600...
	I1216 06:08:55.521543    7444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:08:55.540266    7444 out.go:368] Setting JSON to false
	I1216 06:08:55.542987    7444 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6357,"bootTime":1765858978,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:08:55.542987    7444 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:08:55.548231    7444 out.go:179] * [newest-cni-256200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:08:55.551241    7444 notify.go:221] Checking for updates...
	I1216 06:08:55.551241    7444 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:08:55.553232    7444 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:08:55.555233    7444 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:08:55.560232    7444 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:08:55.564232    7444 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:08:55.568344    7444 config.go:182] Loaded profile config "default-k8s-diff-port-292200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:08:55.568925    7444 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:08:55.568925    7444 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:08:55.569578    7444 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:08:55.688935    7444 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:08:55.691937    7444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:08:55.916599    7444 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:08:55.896743881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:08:55.919607    7444 out.go:179] * Using the docker driver based on user configuration
	I1216 06:08:55.922600    7444 start.go:309] selected driver: docker
	I1216 06:08:55.922600    7444 start.go:927] validating driver "docker" against <nil>
	I1216 06:08:55.922600    7444 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:08:55.974606    7444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:08:56.230220    7444 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:08:56.210841236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:08:56.231221    7444 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1216 06:08:56.231221    7444 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1216 06:08:56.231221    7444 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 06:08:56.340858    7444 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:08:56.347193    7444 cni.go:84] Creating CNI manager for ""
	I1216 06:08:56.347507    7444 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:08:56.347507    7444 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
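	The warning a few lines up says --network-plugin=cni obliges the caller to supply a CNI, and minikube then falls back to the bridge CNI on its own. A hedged equivalent of this test's invocation using the friendlier --cni flag instead (all other values copied from the command under test; --cni=bridge mirrors the plugin minikube auto-selected above):

	out/minikube-windows-amd64.exe start -p newest-cni-256200 --memory=3072 --wait=apiserver,system_pods,default_sa --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 --cni=bridge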
	I1216 06:08:56.347761    7444 start.go:353] cluster config:
	{Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:08:56.352881    7444 out.go:179] * Starting "newest-cni-256200" primary control-plane node in "newest-cni-256200" cluster
	I1216 06:08:56.354691    7444 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:08:56.359017    7444 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:08:56.362114    7444 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:08:56.362114    7444 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:08:56.362823    7444 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 06:08:56.362823    7444 cache.go:65] Caching tarball of preloaded images
	I1216 06:08:56.363042    7444 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:08:56.363042    7444 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 06:08:56.363042    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\config.json ...
	I1216 06:08:56.363579    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\config.json: {Name:mke1be0d938bb076c2f3975473f38b388672ecab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:08:56.441292    7444 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:08:56.441292    7444 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:08:56.441350    7444 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:08:56.441350    7444 start.go:360] acquireMachinesLock for newest-cni-256200: {Name:mk3285fa9eff9b8fb8b7734006d0edc9845e0471 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:08:56.441350    7444 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-256200"
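
The machines lock above is acquired with a 500ms retry delay and a 10-minute timeout (the {Delay:500ms Timeout:10m0s} fields). A minimal sketch of that acquire loop, using an exclusive lock file as a stand-in mechanism; minikube's actual lock implementation may differ:

    package main

    import (
    	"errors"
    	"log"
    	"os"
    	"time"
    )

    // acquire polls for an exclusive lock file every `delay` until `timeout`
    // elapses, mirroring the Delay/Timeout fields in the log line above.
    // The lock-file mechanism here is an assumption for the sketch.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire(os.TempDir()+"/machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer release()
    }
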
	I1216 06:08:56.441350    7444 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:08:56.441350    7444 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:08:56.457950    7444 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:08:56.458586    7444 start.go:159] libmachine.API.Create for "newest-cni-256200" (driver="docker")
	I1216 06:08:56.458586    7444 client.go:173] LocalClient.Create starting
	I1216 06:08:56.459243    7444 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:08:56.459243    7444 main.go:143] libmachine: Decoding PEM data...
	I1216 06:08:56.459243    7444 main.go:143] libmachine: Parsing certificate...
	I1216 06:08:56.459920    7444 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:08:56.459920    7444 main.go:143] libmachine: Decoding PEM data...
	I1216 06:08:56.459920    7444 main.go:143] libmachine: Parsing certificate...
	I1216 06:08:56.463423    7444 cli_runner.go:164] Run: docker network inspect newest-cni-256200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:08:56.514412    7444 cli_runner.go:211] docker network inspect newest-cni-256200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:08:56.517417    7444 network_create.go:284] running [docker network inspect newest-cni-256200] to gather additional debugging logs...
	I1216 06:08:56.517417    7444 cli_runner.go:164] Run: docker network inspect newest-cni-256200
	W1216 06:08:56.568600    7444 cli_runner.go:211] docker network inspect newest-cni-256200 returned with exit code 1
	I1216 06:08:56.569028    7444 network_create.go:287] error running [docker network inspect newest-cni-256200]: docker network inspect newest-cni-256200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-256200 not found
	I1216 06:08:56.569056    7444 network_create.go:289] output of [docker network inspect newest-cni-256200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-256200 not found
	
	** /stderr **
	I1216 06:08:56.572824    7444 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:08:56.652698    7444 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:08:56.668263    7444 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:08:56.684247    7444 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:08:56.700336    7444 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:08:56.713495    7444 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00180cf30}
	I1216 06:08:56.713495    7444 network_create.go:124] attempt to create docker network newest-cni-256200 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1216 06:08:56.716495    7444 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256200 newest-cni-256200
	W1216 06:08:56.769488    7444 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256200 newest-cni-256200 returned with exit code 1
	W1216 06:08:56.769488    7444 network_create.go:149] failed to create docker network newest-cni-256200 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256200 newest-cni-256200: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:08:56.769488    7444 network_create.go:116] failed to create docker network newest-cni-256200 192.168.85.0/24, will retry: subnet is taken
	I1216 06:08:56.792649    7444 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:08:56.808041    7444 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001869bf0}
	I1216 06:08:56.808041    7444 network_create.go:124] attempt to create docker network newest-cni-256200 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 06:08:56.812072    7444 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-256200 newest-cni-256200
	I1216 06:08:56.954999    7444 network_create.go:108] docker network newest-cni-256200 192.168.94.0/24 created
	I1216 06:08:56.955076    7444 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-256200" container
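
The subnet scan above starts at 192.168.49.0/24 and advances the third octet by 9 on each attempt (.58, .67, .76, .85, .94), skipping reserved or taken networks and retrying after the daemon's "Pool overlaps" error; the gateway is the .1 address and the node's static IP the .2 address. A sketch inferred from this log (function names here are invented, not minikube's):

    package main

    import (
    	"fmt"
    	"net"
    )

    // candidateSubnets lists /24 networks starting at 192.168.49.0 and
    // advancing the third octet by 9, matching the attempts logged above.
    func candidateSubnets(n int) []*net.IPNet {
    	subnets := make([]*net.IPNet, 0, n)
    	for i := 0; i < n; i++ {
    		ip := net.IPv4(192, 168, byte(49+9*i), 0)
    		subnets = append(subnets, &net.IPNet{IP: ip, Mask: net.CIDRMask(24, 32)})
    	}
    	return subnets
    }

    // gatewayAndNodeIP derives the .1 gateway and the .2 static container IP
    // that the log computes for 192.168.94.0/24.
    func gatewayAndNodeIP(subnet *net.IPNet) (net.IP, net.IP) {
    	base := subnet.IP.To4()
    	return net.IPv4(base[0], base[1], base[2], 1), net.IPv4(base[0], base[1], base[2], 2)
    }

    func main() {
    	for _, s := range candidateSubnets(6) {
    		gw, node := gatewayAndNodeIP(s)
    		fmt.Printf("subnet %s gateway %s node %s\n", s, gw, node)
    	}
    }
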
	I1216 06:08:56.966803    7444 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:08:57.025801    7444 cli_runner.go:164] Run: docker volume create newest-cni-256200 --label name.minikube.sigs.k8s.io=newest-cni-256200 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:08:57.084489    7444 oci.go:103] Successfully created a docker volume newest-cni-256200
	I1216 06:08:57.088647    7444 cli_runner.go:164] Run: docker run --rm --name newest-cni-256200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256200 --entrypoint /usr/bin/test -v newest-cni-256200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:08:58.364538    7444 cli_runner.go:217] Completed: docker run --rm --name newest-cni-256200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256200 --entrypoint /usr/bin/test -v newest-cni-256200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.2758478s)
	I1216 06:08:58.364538    7444 oci.go:107] Successfully prepared a docker volume newest-cni-256200
	I1216 06:08:58.364538    7444 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:08:58.364538    7444 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:08:58.368381    7444 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-256200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 06:09:13.982090    7444 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-256200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (15.6133991s)
	I1216 06:09:13.982142    7444 kic.go:203] duration metric: took 15.6173982s to extract preloaded images to volume ...
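
Preloaded images are extracted by a throwaway container that mounts the tarball read-only next to the machine's named volume and untars into it, exactly as the docker run above shows. A minimal sketch of that invocation, with the image digest shortened and error handling simplified:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	tarball := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4`
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"

    	// Mount the preload read-only, the machine volume at /extractDir,
    	// and let tar in the kicbase image do the lz4 extraction.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "newest-cni-256200:/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract preload: %v\n%s", err, out)
    	}
    }
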
	I1216 06:09:13.987139    7444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:09:14.234501    7444 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:09:14.215229995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:09:14.237498    7444 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:09:14.460639    7444 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-256200 --name newest-cni-256200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-256200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-256200 --network newest-cni-256200 --ip 192.168.94.2 --volume newest-cni-256200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:09:15.139677    7444 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Running}}
	I1216 06:09:15.201440    7444 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:09:15.254444    7444 cli_runner.go:164] Run: docker exec newest-cni-256200 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:09:15.364499    7444 oci.go:144] the created container "newest-cni-256200" has a running status.
	I1216 06:09:15.364499    7444 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa...
	I1216 06:09:15.601612    7444 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:09:15.685591    7444 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:09:15.746590    7444 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:09:15.747592    7444 kic_runner.go:114] Args: [docker exec --privileged newest-cni-256200 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:09:15.868290    7444 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa...
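
The kic SSH key pair is generated locally and its public half is copied into the container's authorized_keys (the 381 bytes above is consistent with an RSA-2048 key). A sketch that reproduces the observable artifacts, not necessarily minikube's exact code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Private key PEM -> id_rsa, mode 0600 (the log later tightens
    	// permissions so only the current user can read it).
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
    		log.Fatal(err)
    	}
    	// Public key in authorized_keys format ("ssh-rsa AAAA...\n") -> id_rsa.pub.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		log.Fatal(err)
    	}
    }
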
	I1216 06:09:17.943803    7444 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:09:17.996269    7444 machine.go:94] provisionDockerMachine start ...
	I1216 06:09:18.000114    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:18.057995    7444 main.go:143] libmachine: Using SSH client type: native
	I1216 06:09:18.072473    7444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54657 <nil> <nil>}
	I1216 06:09:18.072537    7444 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:09:18.233555    7444 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256200
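
Every SSH call above first resolves the container's published 22/tcp port (54657 here) with the Go template shown in the inspect commands. A small sketch of that lookup:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort extracts the 127.0.0.1 port that docker mapped to the
    // container's 22/tcp, using the same template as the log above.
    func hostSSHPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("newest-cni-256200")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("ssh -> 127.0.0.1:" + port)
    }
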
	
	I1216 06:09:18.233555    7444 ubuntu.go:182] provisioning hostname "newest-cni-256200"
	I1216 06:09:18.239609    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:18.295136    7444 main.go:143] libmachine: Using SSH client type: native
	I1216 06:09:18.295729    7444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54657 <nil> <nil>}
	I1216 06:09:18.295729    7444 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256200 && echo "newest-cni-256200" | sudo tee /etc/hostname
	I1216 06:09:18.487647    7444 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256200
	
	I1216 06:09:18.491370    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:18.551252    7444 main.go:143] libmachine: Using SSH client type: native
	I1216 06:09:18.551252    7444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54657 <nil> <nil>}
	I1216 06:09:18.551252    7444 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256200/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:09:18.729041    7444 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:09:18.729086    7444 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:09:18.729115    7444 ubuntu.go:190] setting up certificates
	I1216 06:09:18.729115    7444 provision.go:84] configureAuth start
	I1216 06:09:18.732813    7444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256200
	I1216 06:09:18.789862    7444 provision.go:143] copyHostCerts
	I1216 06:09:18.790179    7444 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:09:18.790179    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:09:18.790179    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:09:18.791458    7444 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:09:18.791458    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:09:18.791458    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:09:18.792653    7444 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:09:18.792653    7444 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:09:18.792913    7444 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:09:18.793607    7444 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-256200 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-256200]
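
The server certificate is signed by the local minikube CA and carries the SANs listed above (127.0.0.1, 192.168.94.2, localhost, minikube, newest-cni-256200). A sketch of the x509 shape, assuming RSA keys; loading ca.pem/ca-key.pem is replaced by a generated stand-in CA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must[T any](v T, err error) T {
    	if err != nil {
    		log.Fatal(err)
    	}
    	return v
    }

    func main() {
    	// Stand-in CA; the real flow parses ca.pem / ca-key.pem instead.
    	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

    	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-256200"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-256200"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
    	must(0, os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
    }
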
	I1216 06:09:19.022408    7444 provision.go:177] copyRemoteCerts
	I1216 06:09:19.026285    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:09:19.029931    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:19.083549    7444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54657 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:09:19.201389    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:09:19.230931    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:09:19.254191    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:09:19.278832    7444 provision.go:87] duration metric: took 549.6643ms to configureAuth
	I1216 06:09:19.278832    7444 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:09:19.279357    7444 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:09:19.282923    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:19.339605    7444 main.go:143] libmachine: Using SSH client type: native
	I1216 06:09:19.340321    7444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54657 <nil> <nil>}
	I1216 06:09:19.340321    7444 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:09:19.511201    7444 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:09:19.511254    7444 ubuntu.go:71] root file system type: overlay
	I1216 06:09:19.511323    7444 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:09:19.514944    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:19.574194    7444 main.go:143] libmachine: Using SSH client type: native
	I1216 06:09:19.574276    7444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54657 <nil> <nil>}
	I1216 06:09:19.574276    7444 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:09:19.769754    7444 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:09:19.773521    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:19.829370    7444 main.go:143] libmachine: Using SSH client type: native
	I1216 06:09:19.829370    7444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54657 <nil> <nil>}
	I1216 06:09:19.829370    7444 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:09:21.283254    7444 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:09:19.753063151 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:09:21.283319    7444 machine.go:97] duration metric: took 3.2870062s to provisionDockerMachine
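
The unit update above is idempotent: the candidate is written to docker.service.new, and only when diff reports a change is it moved into place and the service re-enabled and restarted (the empty ExecStart= line clears the command inherited from the base unit, as the unit's own comments explain). A sketch composing that same shell step; the SSH transport is elided:

    package main

    import "fmt"

    // applyUnitCmd builds the conditional-replace one-liner seen in the log:
    // diff succeeds (no change) -> do nothing; diff fails (changed) -> swap
    // in the new unit and reload/enable/restart.
    func applyUnitCmd(path string) string {
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
    			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
    			"sudo systemctl -f restart docker; }", path)
    }

    func main() {
    	fmt.Println(applyUnitCmd("/lib/systemd/system/docker.service"))
    }
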
	I1216 06:09:21.283319    7444 client.go:176] duration metric: took 24.8244049s to LocalClient.Create
	I1216 06:09:21.283379    7444 start.go:167] duration metric: took 24.8244651s to libmachine.API.Create "newest-cni-256200"
	I1216 06:09:21.283379    7444 start.go:293] postStartSetup for "newest-cni-256200" (driver="docker")
	I1216 06:09:21.283379    7444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:09:21.287544    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:09:21.291030    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:21.345540    7444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54657 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:09:21.477563    7444 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:09:21.487222    7444 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:09:21.487222    7444 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:09:21.487222    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:09:21.487795    7444 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:09:21.488376    7444 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:09:21.495613    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:09:21.510284    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:09:21.538104    7444 start.go:296] duration metric: took 254.7222ms for postStartSetup
	I1216 06:09:21.543829    7444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256200
	I1216 06:09:21.599921    7444 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\config.json ...
	I1216 06:09:21.606458    7444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:09:21.608625    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:21.661940    7444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54657 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:09:21.793113    7444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:09:21.802678    7444 start.go:128] duration metric: took 25.3604645s to createHost
	I1216 06:09:21.802678    7444 start.go:83] releasing machines lock for "newest-cni-256200", held for 25.3609929s
	I1216 06:09:21.807307    7444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256200
	I1216 06:09:21.860940    7444 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:09:21.864409    7444 ssh_runner.go:195] Run: cat /version.json
	I1216 06:09:21.865057    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:21.866955    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:21.922650    7444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54657 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:09:21.926046    7444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54657 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	W1216 06:09:22.046555    7444 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:09:22.053279    7444 ssh_runner.go:195] Run: systemctl --version
	I1216 06:09:22.067661    7444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:09:22.076444    7444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:09:22.081646    7444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:09:22.131477    7444 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:09:22.131477    7444 start.go:496] detecting cgroup driver to use...
	I1216 06:09:22.131477    7444 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:09:22.131477    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1216 06:09:22.155087    7444 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:09:22.155191    7444 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:09:22.159031    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:09:22.179881    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:09:22.195646    7444 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:09:22.202317    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:09:22.222155    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:09:22.242561    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:09:22.263068    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:09:22.284693    7444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:09:22.304021    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:09:22.322549    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:09:22.341514    7444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:09:22.361142    7444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:09:22.378347    7444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:09:22.396361    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:09:22.528524    7444 ssh_runner.go:195] Run: sudo systemctl restart containerd
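
The series of sed -i calls above adjusts /etc/containerd/config.toml (pause image, SystemdCgroup=false for the "cgroupfs" driver, runc v2, conf_dir) before containerd is restarted. One of those in-place edits done in Go rather than sed, as a sketch:

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
    	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		log.Fatal(err)
    	}
    }
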
	I1216 06:09:22.703335    7444 start.go:496] detecting cgroup driver to use...
	I1216 06:09:22.703335    7444 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:09:22.707595    7444 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:09:22.737358    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:09:22.758361    7444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:09:22.816918    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:09:22.842165    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:09:22.862038    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:09:22.892590    7444 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:09:22.905662    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:09:22.919606    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:09:22.943563    7444 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:09:23.086985    7444 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:09:23.242765    7444 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:09:23.242765    7444 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:09:23.266992    7444 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:09:23.288122    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:09:23.415987    7444 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:09:24.298601    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:09:24.318602    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:09:24.339602    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:09:24.362122    7444 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:09:24.525635    7444 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:09:24.671602    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:09:24.819194    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:09:24.849058    7444 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:09:24.871871    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:09:25.036257    7444 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:09:25.141946    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:09:25.164831    7444 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:09:25.170029    7444 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:09:25.177935    7444 start.go:564] Will wait 60s for crictl version
	I1216 06:09:25.181753    7444 ssh_runner.go:195] Run: which crictl
	I1216 06:09:25.195314    7444 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:09:25.240214    7444 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:09:25.243208    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:09:25.286783    7444 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:09:25.341548    7444 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 06:09:25.347121    7444 cli_runner.go:164] Run: docker exec -t newest-cni-256200 dig +short host.docker.internal
	I1216 06:09:25.487726    7444 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:09:25.492423    7444 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:09:25.504004    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
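
The /etc/hosts update above (repeated later for control-plane.minikube.internal) removes any stale record for the name and appends the fresh IP before copying the file back into place. A sketch that composes the same one-liner:

    package main

    import "fmt"

    // upsertHostsCmd mirrors the bash pattern in the log: grep out the old
    // tab-separated record, append the new one, write via a temp file.
    func upsertHostsCmd(ip, name string) string {
    	return fmt.Sprintf(
    		"{ grep -v $'\\t%[2]s$' \"/etc/hosts\"; echo \"%[1]s\t%[2]s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
    		ip, name)
    }

    func main() {
    	fmt.Println(upsertHostsCmd("192.168.65.254", "host.minikube.internal"))
    }
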
	I1216 06:09:25.522065    7444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:09:25.584163    7444 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 06:09:25.586337    7444 kubeadm.go:884] updating cluster {Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:09:25.586531    7444 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:09:25.589798    7444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:09:25.637880    7444 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:09:25.637880    7444 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:09:25.641923    7444 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:09:25.675808    7444 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:09:25.675808    7444 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:09:25.675869    7444 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 docker true true} ...
	I1216 06:09:25.675917    7444 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:09:25.679558    7444 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:09:25.770521    7444 cni.go:84] Creating CNI manager for ""
	I1216 06:09:25.770521    7444 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:09:25.770521    7444 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 06:09:25.770521    7444 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256200 NodeName:newest-cni-256200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:09:25.770521    7444 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-256200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
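	(The multi-document config above, InitConfiguration/ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration, is what kubeadm consumes below as /var/tmp/minikube/kubeadm.yaml. A sketch of a standalone sanity check on the node, assuming kubeadm v1.26+ which ships the `kubeadm config validate` subcommand:
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml)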
	
	I1216 06:09:25.774895    7444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:09:25.787540    7444 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:09:25.792493    7444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:09:25.808624    7444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 06:09:25.825612    7444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:09:25.842605    7444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1216 06:09:25.864609    7444 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:09:25.871613    7444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
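	(The rewrite above goes through /tmp/h.$$ plus a sudo cp because a plain shell redirect is opened by the calling, unprivileged shell rather than by sudo. Illustration of the pitfall, not what minikube runs:
	  sudo echo '192.168.94.2 control-plane.minikube.internal' >> /etc/hosts   # fails: the >> is performed by the non-root shell
	  echo '192.168.94.2 control-plane.minikube.internal' | sudo tee -a /etc/hosts   # safe for appends; full rewrites still need the tmp-file-then-cp dance)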
	I1216 06:09:25.889606    7444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:09:26.038260    7444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:09:26.062948    7444 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200 for IP: 192.168.94.2
	I1216 06:09:26.062948    7444 certs.go:195] generating shared ca certs ...
	I1216 06:09:26.062948    7444 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:09:26.063886    7444 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:09:26.063918    7444 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:09:26.063918    7444 certs.go:257] generating profile certs ...
	I1216 06:09:26.064553    7444 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\client.key
	I1216 06:09:26.064685    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\client.crt with IP's: []
	I1216 06:09:26.085029    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\client.crt ...
	I1216 06:09:26.085029    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\client.crt: {Name:mk370f15666060c1646827bb4c4066a3ef55468b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:09:26.086679    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\client.key ...
	I1216 06:09:26.086746    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\client.key: {Name:mk828e09f0a8c599e30a608ca488ff85f3102d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:09:26.087878    7444 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key.6f0b4644
	I1216 06:09:26.087878    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.crt.6f0b4644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1216 06:09:26.170165    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.crt.6f0b4644 ...
	I1216 06:09:26.170165    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.crt.6f0b4644: {Name:mk302f19aa8ae22a1fbb9d671202aec869f572d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:09:26.171153    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key.6f0b4644 ...
	I1216 06:09:26.171153    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key.6f0b4644: {Name:mk48e64edafbd77f87f07449e5efd46ea06bb0c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:09:26.172154    7444 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.crt.6f0b4644 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.crt
	I1216 06:09:26.186164    7444 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key.6f0b4644 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key
	I1216 06:09:26.187160    7444 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.key
	I1216 06:09:26.187160    7444 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.crt with IP's: []
	I1216 06:09:26.251268    7444 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.crt ...
	I1216 06:09:26.251268    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.crt: {Name:mk35d8a827ff960007c91a928013d50e313377b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:09:26.251426    7444 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.key ...
	I1216 06:09:26.251426    7444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.key: {Name:mk5d826200b72921d477375cdf47b66cc3aaf991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:09:26.266349    7444 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:09:26.267334    7444 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:09:26.267334    7444 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:09:26.267334    7444 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:09:26.267334    7444 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:09:26.267334    7444 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:09:26.267334    7444 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:09:26.268340    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:09:26.299851    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:09:26.328759    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:09:26.353913    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:09:26.383720    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:09:26.419910    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:09:26.445915    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:09:26.472910    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:09:26.495911    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:09:26.520913    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:09:26.542907    7444 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:09:26.566857    7444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:09:26.588506    7444 ssh_runner.go:195] Run: openssl version
	I1216 06:09:26.602767    7444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:09:26.618657    7444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:09:26.634154    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:09:26.642726    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:09:26.647026    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:09:26.695239    7444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:09:26.712236    7444 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:09:26.728241    7444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:09:26.742234    7444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:09:26.756235    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:09:26.763235    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:09:26.766235    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:09:26.813965    7444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:09:26.835879    7444 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:09:26.853305    7444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:09:26.872719    7444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:09:26.890579    7444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:09:26.902587    7444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:09:26.906027    7444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:09:26.969300    7444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:09:26.984289    7444 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
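	(The b5213941.0, 51391683.0 and 3ec20f2e.0 names created above follow OpenSSL's subject-hash convention: OpenSSL resolves CAs in /etc/ssl/certs through symlinks named <subject-hash>.N, and the hash is exactly what `openssl x509 -hash -noout` prints. Illustrative check on the node:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem)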
	I1216 06:09:27.004353    7444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:09:27.014541    7444 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:09:27.014568    7444 kubeadm.go:401] StartCluster: {Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:09:27.018456    7444 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:09:27.058762    7444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:09:27.080326    7444 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:09:27.092097    7444 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:09:27.096521    7444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:09:27.112294    7444 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:09:27.112294    7444 kubeadm.go:158] found existing configuration files:
	
	I1216 06:09:27.116464    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:09:27.128036    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:09:27.132037    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:09:27.146027    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:09:27.157042    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:09:27.161028    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:09:27.176825    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:09:27.188371    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:09:27.191789    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:09:27.211024    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:09:27.222040    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:09:27.226575    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:09:27.241198    7444 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:09:27.362458    7444 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:09:27.447128    7444 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:09:27.550535    7444 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:13:29.437822    7444 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:29.437822    7444 kubeadm.go:319] 
	I1216 06:13:29.438345    7444 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:29.442203    7444 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:29.442288    7444 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:29.442391    7444 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:29.442422    7444 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:29.442532    7444 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:29.442639    7444 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:29.442697    7444 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:29.443354    7444 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:29.443491    7444 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:29.444615    7444 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:29.445371    7444 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:29.445501    7444 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:29.445583    7444 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:29.445630    7444 kubeadm.go:319] OS: Linux
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:29.446464    7444 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:29.447176    7444 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:29.451165    7444 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:29.453414    7444 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:29.453588    7444 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:29.453727    7444 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:29.457212    7444 out.go:252]   - Booting up control plane ...
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:29.457981    7444 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:29.458269    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:29.458458    7444 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:29.459071    7444 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:29.459187    7444 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.0010934s
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459234    7444 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459809    7444 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 
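	(With the docker driver, the systemd commands suggested above have to run inside the node container; from the Windows host that can be done over minikube ssh. A sketch, assuming the container is still running and curl is present in the kicbase image:
	  minikube -p newest-cni-256200 ssh -- sudo systemctl status kubelet --no-pager
	  minikube -p newest-cni-256200 ssh -- sudo journalctl -xeu kubelet --no-pager -n 100
	  minikube -p newest-cni-256200 ssh -- curl -sS http://127.0.0.1:10248/healthz   # the probe kubeadm timed out on)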
	W1216 06:13:29.459809    7444 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.0010934s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 06:13:29.463847    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:13:29.953578    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:13:29.979536    7444 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:13:29.985016    7444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:13:29.996493    7444 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:13:29.996493    7444 kubeadm.go:158] found existing configuration files:
	
	I1216 06:13:30.000490    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:13:30.012501    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:13:30.016488    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:13:30.031492    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:13:30.042509    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:13:30.046490    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:13:30.066672    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.081178    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:13:30.085494    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.103106    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:13:30.115159    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:13:30.119152    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:13:30.134150    7444 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:13:30.260471    7444 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:13:30.351419    7444 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:13:30.450039    7444 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:17:31.092610    7444 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:17:31.092741    7444 kubeadm.go:319] 
	I1216 06:17:31.093470    7444 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:17:31.099820    7444 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:17:31.100783    7444 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:17:31.100783    7444 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:17:31.100783    7444 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:17:31.100783    7444 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:17:31.100783    7444 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:17:31.101967    7444 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:17:31.102536    7444 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:17:31.102707    7444 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:17:31.102899    7444 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:17:31.103093    7444 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] OS: Linux
	I1216 06:17:31.104167    7444 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:17:31.104314    7444 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:17:31.105204    7444 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:17:31.105393    7444 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:17:31.105570    7444 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:17:31.105745    7444 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:17:31.115935    7444 out.go:252]   - Generating certificates and keys ...
	I1216 06:17:31.115935    7444 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:17:31.115935    7444 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:17:31.117942    7444 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:17:31.118942    7444 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:17:31.121689    7444 out.go:252]   - Booting up control plane ...
	I1216 06:17:31.121689    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:17:31.122230    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:17:31.122332    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:17:31.122517    7444 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:17:31.122517    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:17:31.122517    7444 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:17:31.123262    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:17:31.123388    7444 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:17:31.123575    7444 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:17:31.123575    7444 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:17:31.123575    7444 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000088487s
	I1216 06:17:31.123575    7444 kubeadm.go:319] 
	I1216 06:17:31.123575    7444 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:17:31.124275    7444 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:17:31.124641    7444 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:17:31.124697    7444 kubeadm.go:319] 
	I1216 06:17:31.124826    7444 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:17:31.124826    7444 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:17:31.124826    7444 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:17:31.124826    7444 kubeadm.go:319] 
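	(Both attempts fail the same http://127.0.0.1:10248/healthz probe, and both preflights warned that cgroup v1 support is deprecated for kubelet v1.35+; on this WSL2 kernel that is a plausible suspect. The cgroup mode can be checked from inside the node, illustrative:
	  minikube -p newest-cni-256200 ssh -- stat -fc %T /sys/fs/cgroup/   # cgroup2fs = v2; tmpfs = legacy v1 hierarchy)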
	I1216 06:17:31.124826    7444 kubeadm.go:403] duration metric: took 8m4.1037503s to StartCluster
	I1216 06:17:31.125361    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:17:31.129814    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:17:31.195782    7444 cri.go:89] found id: ""
	I1216 06:17:31.195840    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.195840    7444 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:17:31.195840    7444 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:17:31.201335    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:17:31.244231    7444 cri.go:89] found id: ""
	I1216 06:17:31.244282    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.244282    7444 logs.go:284] No container was found matching "etcd"
	I1216 06:17:31.244282    7444 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:17:31.248874    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:17:31.298415    7444 cri.go:89] found id: ""
	I1216 06:17:31.298503    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.298503    7444 logs.go:284] No container was found matching "coredns"
	I1216 06:17:31.298503    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:17:31.303984    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:17:31.345786    7444 cri.go:89] found id: ""
	I1216 06:17:31.345786    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.345786    7444 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:17:31.345786    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:17:31.350541    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:17:31.393153    7444 cri.go:89] found id: ""
	I1216 06:17:31.393153    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.393153    7444 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:17:31.393153    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:17:31.400134    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:17:31.460134    7444 cri.go:89] found id: ""
	I1216 06:17:31.460134    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.460134    7444 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:17:31.460134    7444 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:17:31.465139    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:17:31.523126    7444 cri.go:89] found id: ""
	I1216 06:17:31.523126    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.523126    7444 logs.go:284] No container was found matching "kindnet"
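All seven lookups above are the same `crictl` query with a different name filter; a compact equivalent of that sweep (a sketch, not minikube's actual code) is:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -n "$ids" ] || echo "no container found matching $c"
    done

Every filter coming back empty is consistent with the kubelet never having started: with no kubelet, nothing launches the static pods.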
	I1216 06:17:31.523126    7444 logs.go:123] Gathering logs for container status ...
	I1216 06:17:31.523126    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:17:31.599050    7444 logs.go:123] Gathering logs for kubelet ...
	I1216 06:17:31.599050    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:17:31.694219    7444 logs.go:123] Gathering logs for dmesg ...
	I1216 06:17:31.694219    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:17:31.735217    7444 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:17:31.735217    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:17:31.846635    7444 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:17:31.835022   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.836094   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.837759   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.840112   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.842430   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:17:31.835022   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.836094   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.837759   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.840112   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.842430   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
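The repeated `connection refused` on localhost:8443 is the expected consequence of the empty container listings above: no kube-apiserver container exists, so nothing is bound to the apiserver port. A quick confirmation from inside the node (assuming `ss` is present in the kicbase image):

    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"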
	I1216 06:17:31.846635    7444 logs.go:123] Gathering logs for Docker ...
	I1216 06:17:31.846635    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:17:31.887628    7444 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000088487s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
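Each of the three preflight warnings above names its own remedy. A sketch of all three, run inside the node; note that `swapoff` persists only until the node restarts, and `failCgroupV1` is the KubeletConfiguration field the cgroups-v1 warning refers to:

    # Swap warning: disable swap for this session
    sudo swapoff -a
    # Service-kubelet warning: enable the unit
    sudo systemctl enable kubelet.service
    # SystemVerification warning: opt in to cgroups v1 in /var/lib/kubelet/config.yaml
    #   failCgroupV1: false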
	W1216 06:17:31.887628    7444 out.go:285] * 
	W1216 06:17:31.887628    7444 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	W1216 06:17:31.887628    7444 out.go:285] * 
	W1216 06:17:31.890631    7444 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:17:31.902634    7444 out.go:203] 
	W1216 06:17:31.906624    7444 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	W1216 06:17:31.906624    7444 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:17:31.906624    7444 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:17:31.911624    7444 out.go:203] 

                                                
                                                
** /stderr **
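The suggestion in the log above translates to a concrete retry, with the profile name taken from this test:

    minikube start -p newest-cni-256200 --extra-config=kubelet.cgroup-driver=systemd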
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-256200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
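For reference, this subtest can be re-run in isolation with the usual `go test` name filter; the package path assumes a checkout of the minikube repository, and the suite expects a pre-built binary under out/ (here out/minikube-windows-amd64.exe, per the command lines above):

    go test ./test/integration -run 'TestStartStop/group/newest-cni/serial/FirstStart' -timeout 60m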
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256200
helpers_test.go:244: (dbg) docker inspect newest-cni-256200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66",
	        "Created": "2025-12-16T06:09:14.512792797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:09:14.825267122Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hostname",
	        "HostsPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hosts",
	        "LogPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66-json.log",
	        "Name": "/newest-cni-256200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-256200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256200",
	                "Source": "/var/lib/docker/volumes/newest-cni-256200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256200",
	                "name.minikube.sigs.k8s.io": "newest-cni-256200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "771bfa7da2ead2842ed10177b89bf5ef2e45e3b61880ef998eb1675462cefe49",
	            "SandboxKey": "/var/run/docker/netns/771bfa7da2ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54657"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54658"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54659"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54660"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54661"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c97a08422fb6ea0a0f62c56d96c89be84aa4e33beba1ccaa82b7390e64b42c8e",
	                    "EndpointID": "8751925f2ee7cf9dc88323a2eb80efce9560f4ef2a0abb3571b1e150a2032db4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256200",
	                        "144d2cf5befb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
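Most of the inspect payload is the port map; the two facts the post-mortem actually needs can be pulled out directly with the standard docker CLI:

    docker inspect newest-cni-256200 --format '{{.State.Status}}'   # "running"
    docker port newest-cni-256200                                   # published ports, incl. 8443 -> 127.0.0.1:54661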
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200: exit status 6 (731.3468ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 06:17:33.183849   10440 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
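The warning in the status output names its own fix: the profile's endpoint is missing from the kubeconfig. The repair it suggests, plus a verification step (kubectl assumed on PATH):

    minikube update-context -p newest-cni-256200
    kubectl config current-context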
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25: (1.2064743s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │    PROFILE    │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-030800 sudo systemctl status kubelet --all --full --no-pager                                     │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl cat kubelet --no-pager                                                     │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cat /etc/kubernetes/kubelet.conf                                                     │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cat /var/lib/kubelet/config.yaml                                                     │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl status docker --all --full --no-pager                                      │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl cat docker --no-pager                                                      │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cat /etc/docker/daemon.json                                                          │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo docker system info                                                                   │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl status cri-docker --all --full --no-pager                                  │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl cat cri-docker --no-pager                                                  │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cri-dockerd --version                                                                │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl status containerd --all --full --no-pager                                  │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl cat containerd --no-pager                                                  │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cat /lib/systemd/system/containerd.service                                           │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo cat /etc/containerd/config.toml                                                      │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo containerd config dump                                                               │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo systemctl status crio --all --full --no-pager                                        │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	│ ssh     │ -p calico-030800 sudo systemctl cat crio --no-pager                                                        │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ ssh     │ -p calico-030800 sudo crio config                                                                          │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ delete  │ -p calico-030800                                                                                           │ calico-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │ 16 Dec 25 06:16 UTC │
	│ start   │ -p false-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker │ false-030800  │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:16:56
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
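
The prefix spelled out above is klog's glog-style header: severity letter, month+day, wall-clock time with microseconds, thread id, and the emitting file:line. A small Go sketch of a parser for it (illustrative regexp, not part of minikube):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine captures the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] prefix.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

    func main() {
    	line := "I1216 06:16:56.380396    3264 out.go:360] Setting OutFile to fd 1800 ..."
    	if m := klogLine.FindStringSubmatch(line); m != nil {
    		fmt.Println("severity:", m[1], "date:", m[2], "time:", m[3], "pid:", m[4], "source:", m[5])
    	}
    }
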
	I1216 06:16:56.380396    3264 out.go:360] Setting OutFile to fd 1800 ...
	I1216 06:16:56.431780    3264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:56.431780    3264 out.go:374] Setting ErrFile to fd 1972...
	I1216 06:16:56.431780    3264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:16:56.448932    3264 out.go:368] Setting JSON to false
	I1216 06:16:56.452185    3264 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6838,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:16:56.452185    3264 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:16:56.459449    3264 out.go:179] * [false-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:16:56.463846    3264 notify.go:221] Checking for updates...
	I1216 06:16:56.466918    3264 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:16:56.470602    3264 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:16:56.473281    3264 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:16:56.475532    3264 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:16:56.478545    3264 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:16:56.481581    3264 config.go:182] Loaded profile config "custom-flannel-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:16:56.482160    3264 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:16:56.482511    3264 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:16:56.482681    3264 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:16:56.600339    3264 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:16:56.603339    3264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:16:56.858152    3264 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:16:56.833074267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:16:56.877481    3264 out.go:179] * Using the docker driver based on user configuration
	I1216 06:16:56.886484    3264 start.go:309] selected driver: docker
	I1216 06:16:56.886484    3264 start.go:927] validating driver "docker" against <nil>
	I1216 06:16:56.886484    3264 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:16:56.932244    3264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:16:57.184279    3264 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:16:57.164192368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:16:57.184279    3264 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:16:57.185276    3264 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:16:57.189275    3264 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:16:57.191271    3264 cni.go:84] Creating CNI manager for "false"
	I1216 06:16:57.191271    3264 start.go:353] cluster config:
	{Name:false-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:16:57.193279    3264 out.go:179] * Starting "false-030800" primary control-plane node in "false-030800" cluster
	I1216 06:16:57.197271    3264 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:16:57.200272    3264 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:16:57.202271    3264 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:16:57.202271    3264 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:16:57.202271    3264 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:16:57.202271    3264 cache.go:65] Caching tarball of preloaded images
	I1216 06:16:57.203287    3264 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:16:57.203287    3264 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:16:57.203287    3264 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\config.json ...
	I1216 06:16:57.203287    3264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\config.json: {Name:mk5d0e528d0c5ec7e88227b2fbd1584bc72a7bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:57.285059    3264 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:16:57.285059    3264 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:16:57.285059    3264 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:16:57.285059    3264 start.go:360] acquireMachinesLock for false-030800: {Name:mk9e01369981f4da3e7dccd357fcba767f36194b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:16:57.285059    3264 start.go:364] duration metric: took 0s to acquireMachinesLock for "false-030800"
	I1216 06:16:57.285059    3264 start.go:93] Provisioning new machine with config: &{Name:false-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:16:57.285664    3264 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:16:54.473914   10692 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:16:54.477650   10692 cli_runner.go:164] Run: docker exec -t custom-flannel-030800 dig +short host.docker.internal
	I1216 06:16:54.607585   10692 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:16:54.612869   10692 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:16:54.622618   10692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
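
The /bin/bash one-liner above is the idempotent hosts-entry update: filter out any stale host.minikube.internal line, append the fresh IP mapping, and sudo-copy the result back over /etc/hosts. A minimal Go sketch of the same filter-and-append pattern (hypothetical helper, not minikube's actual code):

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHostsEntry mirrors the grep -v / echo / cp pattern from the log:
    // drop any existing line for the given name, then append "ip\tname".
    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	_ = upsertHostsEntry("/tmp/hosts", "192.168.65.254", "host.minikube.internal")
    }
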
	I1216 06:16:54.705022   10692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-030800
	I1216 06:16:54.761904   10692 kubeadm.go:884] updating cluster {Name:custom-flannel-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:16:54.761904   10692 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:16:54.764988   10692 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:16:54.799370   10692 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:16:54.799370   10692 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:16:54.802658   10692 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:16:54.857309   10692 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:16:54.857358   10692 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:16:54.857358   10692 kubeadm.go:935] updating node { 192.168.112.2 8443 v1.34.2 docker true true} ...
	I1216 06:16:54.857578   10692 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
	I1216 06:16:54.862462   10692 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:16:54.954461   10692 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1216 06:16:54.954461   10692 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:16:54.954461   10692 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-030800 NodeName:custom-flannel-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:16:54.954461   10692 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.112.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
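
The generated config above is a single stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch of reading such a multi-document stream in Go, assuming the gopkg.in/yaml.v3 dependency (not how minikube itself consumes it; kubeadm parses the file on the node):

    package main

    import (
    	"fmt"
    	"io"
    	"strings"

    	"gopkg.in/yaml.v3" // assumed dependency for this sketch
    )

    func main() {
    	stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
    	dec := yaml.NewDecoder(strings.NewReader(stream))
    	for {
    		var doc struct {
    			Kind string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Println(doc.Kind) // InitConfiguration, ClusterConfiguration
    	}
    }
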
	
	I1216 06:16:54.958461   10692 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:16:54.974549   10692 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:16:54.979807   10692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:16:54.992835   10692 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1216 06:16:55.020322   10692 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:16:55.038131   10692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1216 06:16:55.062838   10692 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:16:55.072412   10692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:16:55.090240   10692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:55.227154   10692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:16:55.248171   10692 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800 for IP: 192.168.112.2
	I1216 06:16:55.248171   10692 certs.go:195] generating shared ca certs ...
	I1216 06:16:55.248171   10692 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:55.248872   10692 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:16:55.249152   10692 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:16:55.249348   10692 certs.go:257] generating profile certs ...
	I1216 06:16:55.249756   10692 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\client.key
	I1216 06:16:55.249855   10692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\client.crt with IP's: []
	I1216 06:16:55.419517   10692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\client.crt ...
	I1216 06:16:55.420520   10692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\client.crt: {Name:mk1c69b15088e78b85b6ad4f34de454135227dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:55.420691   10692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\client.key ...
	I1216 06:16:55.420691   10692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\client.key: {Name:mk3c3b73041b46f88548ebf26d903dd78bdf682c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:55.421691   10692 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.key.17968127
	I1216 06:16:55.422468   10692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.crt.17968127 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.112.2]
	I1216 06:16:55.548169   10692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.crt.17968127 ...
	I1216 06:16:55.548169   10692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.crt.17968127: {Name:mkecadf9713d9868eb4864b48179f91c1ad04dfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:55.549167   10692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.key.17968127 ...
	I1216 06:16:55.549167   10692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.key.17968127: {Name:mk01329f9e1956f61e7c0e001b6b471877cc8231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:55.550365   10692 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.crt.17968127 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.crt
	I1216 06:16:55.564269   10692 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.key.17968127 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.key
	I1216 06:16:55.564672   10692 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.key
	I1216 06:16:55.564672   10692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.crt with IP's: []
	I1216 06:16:55.952724   10692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.crt ...
	I1216 06:16:55.952724   10692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.crt: {Name:mk285590889e25080e15a8858880bd6e0c7b295a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:55.954074   10692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.key ...
	I1216 06:16:55.954120   10692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.key: {Name:mk7b5660d2aadcf340743f3137790cd68acb117b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:55.970311   10692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:16:55.970910   10692 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:16:55.970910   10692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:16:55.970910   10692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:16:55.971591   10692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:16:55.971808   10692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:16:55.972040   10692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:16:55.972378   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:16:56.000354   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:16:56.023825   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:16:56.052461   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:16:56.075769   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 06:16:56.118967   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:16:56.156249   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:16:56.189643   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:16:56.217701   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:16:56.241695   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:16:56.268705   10692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:16:56.312704   10692 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:16:56.341629   10692 ssh_runner.go:195] Run: openssl version
	I1216 06:16:56.356743   10692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:16:56.375798   10692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:16:56.396228   10692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:16:56.406237   10692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:16:56.412901   10692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:16:56.490298   10692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:16:56.506947   10692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
	I1216 06:16:56.523417   10692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:56.544339   10692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:16:56.559338   10692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:56.565339   10692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:56.569341   10692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:56.620155   10692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:16:56.637046   10692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:16:56.658191   10692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:16:56.704016   10692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:16:56.732320   10692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:16:56.744923   10692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:16:56.750514   10692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:16:56.797935   10692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:16:56.814951   10692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
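
The openssl x509 -hash -noout calls above compute OpenSSL's subject-name hash, and each CA is then symlinked as <hash>.0 (for example b5213941.0 for minikubeCA.pem), the filename that c_rehash-style certificate lookup expects. A sketch of the same probe-and-link step, shelling out to openssl as the log does (illustrative only; paths and privileges differ on a real node):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash computes the subject hash of a PEM cert and
    // symlinks it as <hash>.0 in certDir, mirroring the openssl/ln pair.
    func linkCertByHash(certPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // -f semantics of ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
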
	I1216 06:16:56.840406   10692 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:16:56.847875   10692 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
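
The failed stat above is the first-start probe: if /var/lib/minikube/certs/apiserver-kubelet-client.crt is absent, no control plane has been initialized on this node yet and a full kubeadm init follows. The same check as a trivial Go sketch:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Mirrors the stat probe in the log: a missing cert means first start.
    	if _, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt"); os.IsNotExist(err) {
    		fmt.Println("likely first start: no apiserver-kubelet-client cert yet")
    	}
    }
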
	I1216 06:16:56.847875   10692 kubeadm.go:401] StartCluster: {Name:custom-flannel-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:16:56.853318   10692 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:16:56.892487   10692 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:16:56.908482   10692 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:16:56.924089   10692 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:16:56.929394   10692 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:16:56.954826   10692 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:16:56.954826   10692 kubeadm.go:158] found existing configuration files:
	
	I1216 06:16:56.959394   10692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:16:56.977904   10692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:16:56.984905   10692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:16:57.009907   10692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:16:57.030907   10692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:16:57.036902   10692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:16:57.058902   10692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:16:57.073901   10692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:16:57.077905   10692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:16:57.093906   10692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:16:57.105905   10692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:16:57.109907   10692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:16:57.129120   10692 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:16:57.260219   10692 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:16:57.264027   10692 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:16:57.389326   10692 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 06:16:54.034119    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:16:57.288640    3264 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:16:57.289314    3264 start.go:159] libmachine.API.Create for "false-030800" (driver="docker")
	I1216 06:16:57.289360    3264 client.go:173] LocalClient.Create starting
	I1216 06:16:57.289733    3264 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:16:57.289733    3264 main.go:143] libmachine: Decoding PEM data...
	I1216 06:16:57.289733    3264 main.go:143] libmachine: Parsing certificate...
	I1216 06:16:57.289733    3264 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:16:57.290316    3264 main.go:143] libmachine: Decoding PEM data...
	I1216 06:16:57.290316    3264 main.go:143] libmachine: Parsing certificate...
	I1216 06:16:57.293777    3264 cli_runner.go:164] Run: docker network inspect false-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:16:57.347349    3264 cli_runner.go:211] docker network inspect false-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:16:57.351721    3264 network_create.go:284] running [docker network inspect false-030800] to gather additional debugging logs...
	I1216 06:16:57.351782    3264 cli_runner.go:164] Run: docker network inspect false-030800
	W1216 06:16:57.409544    3264 cli_runner.go:211] docker network inspect false-030800 returned with exit code 1
	I1216 06:16:57.409544    3264 network_create.go:287] error running [docker network inspect false-030800]: docker network inspect false-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network false-030800 not found
	I1216 06:16:57.409544    3264 network_create.go:289] output of [docker network inspect false-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network false-030800 not found
	
	** /stderr **
	I1216 06:16:57.413248    3264 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:16:57.491396    3264 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.506544    3264 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.522470    3264 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.552180    3264 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.567480    3264 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.582636    3264 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.613219    3264 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.628941    3264 network.go:209] skipping subnet 192.168.112.0/24 that is reserved: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:16:57.642078    3264 network.go:206] using free private subnet 192.168.121.0/24: &{IP:192.168.121.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.121.0/24 Gateway:192.168.121.1 ClientMin:192.168.121.2 ClientMax:192.168.121.254 Broadcast:192.168.121.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015f9e60}
	I1216 06:16:57.642078    3264 network_create.go:124] attempt to create docker network false-030800 192.168.121.0/24 with gateway 192.168.121.1 and MTU of 1500 ...
	I1216 06:16:57.645057    3264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.121.0/24 --gateway=192.168.121.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-030800 false-030800
	I1216 06:16:57.777626    3264 network_create.go:108] docker network false-030800 192.168.121.0/24 created
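
The eight "skipping subnet" lines above show the candidate walk: /24 networks are tried from 192.168.49.0 upward in steps of 9 in the third octet, any subnet already reserved by an existing Docker network is skipped, and the first free one wins (here 192.168.121.0/24, gateway .1). A simplified Go sketch of that enumeration (not minikube's exact logic, which also inspects host interfaces):

    package main

    import "fmt"

    // nextFreeSubnet walks the candidates visible in the log: start at
    // 192.168.49.0/24 and step the third octet by 9 until one is free.
    func nextFreeSubnet(reserved map[string]bool) string {
    	for octet := 49; octet <= 254; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !reserved[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	reserved := map[string]bool{}
    	for _, o := range []int{49, 58, 67, 76, 85, 94, 103, 112} {
    		reserved[fmt.Sprintf("192.168.%d.0/24", o)] = true
    	}
    	fmt.Println(nextFreeSubnet(reserved)) // 192.168.121.0/24
    }
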
	I1216 06:16:57.777626    3264 kic.go:121] calculated static IP "192.168.121.2" for the "false-030800" container
	I1216 06:16:57.787052    3264 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:16:57.849337    3264 cli_runner.go:164] Run: docker volume create false-030800 --label name.minikube.sigs.k8s.io=false-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:16:57.912352    3264 oci.go:103] Successfully created a docker volume false-030800
	I1216 06:16:57.918430    3264 cli_runner.go:164] Run: docker run --rm --name false-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-030800 --entrypoint /usr/bin/test -v false-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:16:59.635972    3264 cli_runner.go:217] Completed: docker run --rm --name false-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-030800 --entrypoint /usr/bin/test -v false-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.7174757s)
	I1216 06:16:59.635972    3264 oci.go:107] Successfully prepared a docker volume false-030800
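
The "-preload-sidecar" container above is a throwaway run whose only job is to materialize the named volume and confirm /var/lib exists inside it: with --entrypoint /usr/bin/test, the container exits 0 only if the path is a directory. A sketch of the same check via os/exec, with the volume and image names taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        vol := "false-030800" // volume name from the log
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"
        // --entrypoint /usr/bin/test ... -d /var/lib exits 0 only if the
        // volume-backed /var contains a /var/lib directory.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", vol+":/var", image, "-d", "/var/lib")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("volume not prepared: %v\n%s", err, out)
            return
        }
        fmt.Println("Successfully prepared a docker volume", vol)
    }
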
	I1216 06:16:59.635972    3264 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:16:59.635972    3264 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:16:59.640626    3264 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	W1216 06:17:04.069203    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:06.470180    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:17:06.550561    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:06.550677    2100 retry.go:31] will retry after 18.192310247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
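
The "will retry after 18.192310247s" line comes from minikube's retry helper: failed addon applies are re-run after growing, randomized delays until the apiserver answers. A generic sketch of that retry-with-backoff shape (the exact jitter policy here is an assumption, not minikube's):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a growing, jittered
    // delay between failures, the shape behind "will retry after ...".
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 2*time.Second, func() error {
            calls++
            if calls < 3 {
                return errors.New("connection refused") // apiserver not up yet
            }
            return nil
        })
        fmt.Println("done:", err)
    }
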
	I1216 06:17:11.445259    3264 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (11.804473s)
	I1216 06:17:11.445259    3264 kic.go:203] duration metric: took 11.8091272s to extract preloaded images to volume ...
	I1216 06:17:11.449531    3264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:17:11.699167    3264 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:17:11.681229632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:17:11.703172    3264 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:17:11.958209    3264 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-030800 --name false-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-030800 --network false-030800 --ip 192.168.121.2 --volume false-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
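
Note the --publish=127.0.0.1::8443 style flags in the run above: Docker picks a random host port for each, and the log recovers them afterwards with a Go template passed to docker container inspect -f. A sketch of that lookup for the SSH port, using the same template that appears a few lines below:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template the log itself uses to find the mapped port.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", tmpl, "false-030800").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 55387
    }
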
	I1216 06:17:12.808241    3264 cli_runner.go:164] Run: docker container inspect false-030800 --format={{.State.Running}}
	I1216 06:17:12.869258    3264 cli_runner.go:164] Run: docker container inspect false-030800 --format={{.State.Status}}
	I1216 06:17:12.926243    3264 cli_runner.go:164] Run: docker exec false-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:17:13.037993    3264 oci.go:144] the created container "false-030800" has a running status.
	I1216 06:17:13.037993    3264 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa...
	I1216 06:17:13.089571    3264 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:17:13.167835    3264 cli_runner.go:164] Run: docker container inspect false-030800 --format={{.State.Status}}
	I1216 06:17:13.225846    3264 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:17:13.225846    3264 kic_runner.go:114] Args: [docker exec --privileged false-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:17:13.366314    3264 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa...
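
The kic steps above generate an RSA keypair on the host, then install the public half as /home/docker/.ssh/authorized_keys inside the container and chown it. A sketch of the key-generation half, assuming golang.org/x/crypto/ssh for the authorized_keys encoding (minikube's actual helper may differ):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // PEM private key: what lands in machines/<profile>/id_rsa.
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // authorized_keys line: what is copied to /home/docker/.ssh/.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d private-key bytes\n%s", len(priv), ssh.MarshalAuthorizedKey(pub))
    }
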
	I1216 06:17:15.611552    3264 cli_runner.go:164] Run: docker container inspect false-030800 --format={{.State.Status}}
	I1216 06:17:15.666587    3264 machine.go:94] provisionDockerMachine start ...
	I1216 06:17:15.670400    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:15.733893    3264 main.go:143] libmachine: Using SSH client type: native
	I1216 06:17:15.747643    3264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55387 <nil> <nil>}
	I1216 06:17:15.747643    3264 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:17:15.939731    3264 main.go:143] libmachine: SSH cmd err, output: <nil>: false-030800
	
	I1216 06:17:15.939731    3264 ubuntu.go:182] provisioning hostname "false-030800"
	I1216 06:17:15.943593    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:16.006382    3264 main.go:143] libmachine: Using SSH client type: native
	I1216 06:17:16.006846    3264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55387 <nil> <nil>}
	I1216 06:17:16.006880    3264 main.go:143] libmachine: About to run SSH command:
	sudo hostname false-030800 && echo "false-030800" | sudo tee /etc/hostname
	I1216 06:17:16.199921    3264 main.go:143] libmachine: SSH cmd err, output: <nil>: false-030800
	
	I1216 06:17:16.203943    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:16.264470    3264 main.go:143] libmachine: Using SSH client type: native
	I1216 06:17:16.265442    3264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55387 <nil> <nil>}
	I1216 06:17:16.265442    3264 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	W1216 06:17:14.103030    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:14.767395    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:17:14.860797    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:14.860797    2100 retry.go:31] will retry after 32.78252651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:15.769955    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:17:15.874160    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:15.874160    2100 retry.go:31] will retry after 22.812506175s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:20.164984   10692 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:17:20.165268   10692 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:17:20.165624   10692 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:17:20.165851   10692 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:17:20.166145   10692 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:17:20.166145   10692 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:17:20.168162   10692 out.go:252]   - Generating certificates and keys ...
	I1216 06:17:20.168162   10692 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:17:20.168162   10692 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:17:20.168162   10692 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:17:20.168162   10692 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:17:20.168162   10692 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:17:20.169148   10692 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:17:20.169148   10692 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:17:20.169148   10692 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-030800 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1216 06:17:20.169148   10692 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:17:20.169148   10692 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-030800 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1216 06:17:20.169148   10692 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:17:20.169148   10692 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:17:20.170140   10692 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:17:20.170140   10692 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:17:20.170140   10692 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:17:20.170140   10692 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:17:20.170140   10692 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:17:20.170140   10692 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:17:20.170140   10692 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:17:20.171145   10692 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:17:20.171145   10692 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:17:20.175161   10692 out.go:252]   - Booting up control plane ...
	I1216 06:17:20.175161   10692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:17:20.175161   10692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:17:20.176158   10692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:17:20.176158   10692 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:17:20.176158   10692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:17:20.176158   10692 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:17:20.176158   10692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:17:20.176158   10692 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:17:20.177154   10692 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:17:20.177154   10692 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:17:20.177154   10692 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001684747s
	I1216 06:17:20.177154   10692 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:17:20.177154   10692 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.112.2:8443/livez
	I1216 06:17:20.177154   10692 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:17:20.178171   10692 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:17:20.178171   10692 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 12.23450758s
	I1216 06:17:20.178171   10692 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 13.618361829s
	I1216 06:17:20.178171   10692 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 16.00267799s
	I1216 06:17:20.178171   10692 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:17:20.178171   10692 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:17:20.178171   10692 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:17:20.179151   10692 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:17:20.179151   10692 kubeadm.go:319] [bootstrap-token] Using token: o96s28.kxlv1ikkw8kii7gl
	I1216 06:17:20.183142   10692 out.go:252]   - Configuring RBAC rules ...
	I1216 06:17:20.183142   10692 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:17:20.183142   10692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:17:20.184143   10692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:17:20.184143   10692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:17:20.184143   10692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:17:20.184143   10692 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:17:20.184143   10692 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:17:20.184143   10692 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:17:20.185151   10692 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:17:20.185151   10692 kubeadm.go:319] 
	I1216 06:17:20.185151   10692 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:17:20.185151   10692 kubeadm.go:319] 
	I1216 06:17:20.185151   10692 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:17:20.185151   10692 kubeadm.go:319] 
	I1216 06:17:20.185151   10692 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:17:20.185151   10692 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:17:20.185151   10692 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:17:20.185151   10692 kubeadm.go:319] 
	I1216 06:17:20.185151   10692 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:17:20.185151   10692 kubeadm.go:319] 
	I1216 06:17:20.185151   10692 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:17:20.185151   10692 kubeadm.go:319] 
	I1216 06:17:20.185151   10692 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:17:20.186152   10692 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:17:20.186152   10692 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:17:20.186152   10692 kubeadm.go:319] 
	I1216 06:17:20.186152   10692 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:17:20.186152   10692 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:17:20.186152   10692 kubeadm.go:319] 
	I1216 06:17:20.186152   10692 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o96s28.kxlv1ikkw8kii7gl \
	I1216 06:17:20.186152   10692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:17:20.186152   10692 kubeadm.go:319] 	--control-plane 
	I1216 06:17:20.186152   10692 kubeadm.go:319] 
	I1216 06:17:20.187149   10692 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:17:20.187149   10692 kubeadm.go:319] 
	I1216 06:17:20.187149   10692 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o96s28.kxlv1ikkw8kii7gl \
	I1216 06:17:20.187149   10692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
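
The --discovery-token-ca-cert-hash that kubeadm prints above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), which joining nodes use to pin the CA. A sketch that recomputes it from a ca.crt file (path shortened here; on the node it is /etc/kubernetes/pki/ca.crt):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.crt") // /etc/kubernetes/pki/ca.crt on the node
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's hash is SHA-256 over the DER SubjectPublicKeyInfo.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
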
	I1216 06:17:20.187149   10692 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1216 06:17:20.191169   10692 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1216 06:17:16.431724    3264 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:17:16.431724    3264 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:17:16.431724    3264 ubuntu.go:190] setting up certificates
	I1216 06:17:16.431724    3264 provision.go:84] configureAuth start
	I1216 06:17:16.435831    3264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-030800
	I1216 06:17:16.494374    3264 provision.go:143] copyHostCerts
	I1216 06:17:16.494374    3264 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:17:16.494374    3264 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:17:16.495369    3264 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:17:16.496373    3264 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:17:16.496373    3264 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:17:16.496373    3264 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:17:16.497369    3264 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:17:16.497369    3264 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:17:16.497369    3264 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:17:16.498373    3264 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.false-030800 san=[127.0.0.1 192.168.121.2 false-030800 localhost minikube]
	I1216 06:17:16.544634    3264 provision.go:177] copyRemoteCerts
	I1216 06:17:16.549307    3264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:17:16.552357    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:16.614569    3264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55387 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa Username:docker}
	I1216 06:17:16.744348    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:17:16.769721    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 06:17:16.792719    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:17:16.818460    3264 provision.go:87] duration metric: took 386.73ms to configureAuth
	I1216 06:17:16.818571    3264 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:17:16.818942    3264 config.go:182] Loaded profile config "false-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:17:16.822647    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:16.879712    3264 main.go:143] libmachine: Using SSH client type: native
	I1216 06:17:16.880252    3264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55387 <nil> <nil>}
	I1216 06:17:16.880291    3264 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:17:17.053482    3264 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:17:17.053482    3264 ubuntu.go:71] root file system type: overlay
	I1216 06:17:17.053482    3264 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:17:17.057946    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:17.115103    3264 main.go:143] libmachine: Using SSH client type: native
	I1216 06:17:17.115103    3264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55387 <nil> <nil>}
	I1216 06:17:17.115669    3264 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:17:17.293277    3264 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:17:17.297151    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:17.356498    3264 main.go:143] libmachine: Using SSH client type: native
	I1216 06:17:17.356498    3264 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55387 <nil> <nil>}
	I1216 06:17:17.356498    3264 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:17:18.911439    3264 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:17:17.276707222 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:17:18.911439    3264 machine.go:97] duration metric: took 3.2448083s to provisionDockerMachine
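
The unit update above follows a write-then-swap idiom: write docker.service.new, and only when diff reports a difference, move it into place and daemon-reload, enable, restart. The same shape sketched in Go (paths and systemctl steps taken from the SSH command in the log):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cur := "/lib/systemd/system/docker.service"
        next := cur + ".new"
        old, _ := os.ReadFile(cur) // may not exist yet; empty is fine
        fresh, err := os.ReadFile(next)
        if err != nil {
            panic(err)
        }
        if bytes.Equal(old, fresh) {
            fmt.Println("unit unchanged, skipping restart")
            return
        }
        if err := os.Rename(next, cur); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("%s: %v\n%s", args[0], err, out))
            }
        }
    }
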
	I1216 06:17:18.911439    3264 client.go:176] duration metric: took 21.6217857s to LocalClient.Create
	I1216 06:17:18.911439    3264 start.go:167] duration metric: took 21.6218317s to libmachine.API.Create "false-030800"
	I1216 06:17:18.911439    3264 start.go:293] postStartSetup for "false-030800" (driver="docker")
	I1216 06:17:18.911439    3264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:17:18.916781    3264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:17:18.920356    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:18.975580    3264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55387 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa Username:docker}
	I1216 06:17:19.110714    3264 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:17:19.119103    3264 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:17:19.119103    3264 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:17:19.119103    3264 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:17:19.119894    3264 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:17:19.120610    3264 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:17:19.125545    3264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:17:19.137709    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:17:19.166822    3264 start.go:296] duration metric: took 255.3211ms for postStartSetup
	I1216 06:17:19.172455    3264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-030800
	I1216 06:17:19.226990    3264 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\config.json ...
	I1216 06:17:19.234262    3264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:17:19.238648    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:19.294152    3264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55387 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa Username:docker}
	I1216 06:17:19.416278    3264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:17:19.424305    3264 start.go:128] duration metric: took 22.1383399s to createHost
	I1216 06:17:19.424305    3264 start.go:83] releasing machines lock for "false-030800", held for 22.1389444s
	I1216 06:17:19.427841    3264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-030800
	I1216 06:17:19.483417    3264 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:17:19.487926    3264 ssh_runner.go:195] Run: cat /version.json
	I1216 06:17:19.487926    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:19.491933    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:19.544959    3264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55387 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa Username:docker}
	I1216 06:17:19.546012    3264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55387 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\false-030800\id_rsa Username:docker}
	W1216 06:17:19.656760    3264 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:17:19.661748    3264 ssh_runner.go:195] Run: systemctl --version
	I1216 06:17:19.676765    3264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:17:19.685753    3264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:17:19.689759    3264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1216 06:17:19.713755    3264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1216 06:17:19.735754    3264 cni.go:308] configured [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:17:19.735754    3264 start.go:496] detecting cgroup driver to use...
	I1216 06:17:19.735754    3264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:17:19.735754    3264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:17:19.766605    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 06:17:19.766605    3264 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:17:19.766605    3264 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:17:19.787815    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:17:19.800601    3264 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:17:19.806244    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:17:19.825631    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:17:19.841636    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:17:19.868075    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:17:19.887991    3264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:17:19.905011    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:17:19.921993    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:17:19.937998    3264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:17:19.959080    3264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:17:19.979666    3264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:17:19.998659    3264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:17:20.143812    3264 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 06:17:20.325617    3264 start.go:496] detecting cgroup driver to use...
	I1216 06:17:20.325617    3264 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:17:20.332381    3264 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:17:20.358625    3264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:17:20.386728    3264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:17:20.461735    3264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:17:20.486780    3264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:17:20.505879    3264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:17:20.531058    3264 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:17:20.542016    3264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:17:20.553612    3264 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:17:20.577181    3264 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:17:20.734640    3264 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:17:20.898805    3264 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:17:20.898805    3264 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
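
The 130-byte /etc/docker/daemon.json itself is not printed, only its purpose (switching dockerd to the "cgroupfs" cgroup driver). A plausible shape for such a file, with the caveat that the exact keys minikube writes are an assumption here:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed shape only: the log shows just the size (130 bytes) and
        // the intent ("configuring docker to use cgroupfs").
        cfg := map[string]any{
            "exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
            "log-driver": "json-file",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out))
    }
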
	I1216 06:17:20.926920    3264 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:17:20.951592    3264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:17:21.106403    3264 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:17:20.221478   10692 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 06:17:20.225734   10692 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1216 06:17:20.236064   10692 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1216 06:17:20.236085   10692 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1216 06:17:20.266871   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 06:17:20.718097   10692 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:17:20.723934   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:20.723934   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-030800 minikube.k8s.io/updated_at=2025_12_16T06_17_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=custom-flannel-030800 minikube.k8s.io/primary=true
	I1216 06:17:20.736652   10692 ops.go:34] apiserver oom_adj: -16
	I1216 06:17:20.907137   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:21.407470   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:21.907879   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:22.409190   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:22.054646    3264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:17:22.076037    3264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:17:22.099099    3264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:17:22.125009    3264 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:17:22.268139    3264 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:17:22.424842    3264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:17:22.579117    3264 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:17:22.605240    3264 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:17:22.627254    3264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:17:22.767514    3264 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:17:22.897896    3264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:17:22.919850    3264 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:17:22.924368    3264 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:17:22.935048    3264 start.go:564] Will wait 60s for crictl version
	I1216 06:17:22.940293    3264 ssh_runner.go:195] Run: which crictl
	I1216 06:17:22.954561    3264 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:17:22.996744    3264 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:17:23.001662    3264 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:17:23.047384    3264 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:17:22.909871   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:23.407872   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:23.907109   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:24.407214   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:24.906587   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:25.408081   10692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:17:25.506289   10692 kubeadm.go:1114] duration metric: took 4.7881266s to wait for elevateKubeSystemPrivileges
	I1216 06:17:25.506289   10692 kubeadm.go:403] duration metric: took 28.6580238s to StartCluster
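The repeated "kubectl get sa default" calls above (06:17:20.907137 through 06:17:25.408081, at roughly 500ms intervals) are the elevateKubeSystemPrivileges wait: the apiserver has to mint the "default" ServiceAccount before the minikube-rbac clusterrolebinding created at 06:17:20.723934 is meaningful. Folded into a loop, the poll is equivalent in spirit to:

    # Poll until the "default" ServiceAccount exists, as the calls above do.
    KUBECTL=/var/lib/minikube/binaries/v1.34.2/kubectl
    until sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done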
	I1216 06:17:25.506289   10692 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:25.506289   10692 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:17:25.508287   10692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:25.509511   10692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:17:25.509576   10692 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:17:25.509740   10692 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:17:25.509740   10692 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-030800"
	I1216 06:17:25.509740   10692 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-030800"
	I1216 06:17:25.509740   10692 host.go:66] Checking if "custom-flannel-030800" exists ...
	I1216 06:17:25.509740   10692 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-030800"
	I1216 06:17:25.509740   10692 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-030800"
	I1216 06:17:25.509740   10692 config.go:182] Loaded profile config "custom-flannel-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:17:25.513596   10692 out.go:179] * Verifying Kubernetes components...
	I1216 06:17:25.520329   10692 cli_runner.go:164] Run: docker container inspect custom-flannel-030800 --format={{.State.Status}}
	I1216 06:17:25.523908   10692 cli_runner.go:164] Run: docker container inspect custom-flannel-030800 --format={{.State.Status}}
	I1216 06:17:25.523908   10692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:17:25.585881   10692 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-030800"
	I1216 06:17:25.585881   10692 host.go:66] Checking if "custom-flannel-030800" exists ...
	I1216 06:17:25.586864   10692 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:17:23.092882    3264 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:17:23.095599    3264 cli_runner.go:164] Run: docker exec -t false-030800 dig +short host.docker.internal
	I1216 06:17:23.228930    3264 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:17:23.233506    3264 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:17:23.243419    3264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
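The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal record, append a fresh one pointing at the dig-discovered host IP, and copy the result back through a temp file. Unrolled for readability (same commands, same record):

    # Unrolled form of the /etc/hosts rewrite above.
    ip=192.168.65.254
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$ip"
    } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts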
	I1216 06:17:23.261349    3264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-030800
	I1216 06:17:23.318473    3264 kubeadm.go:884] updating cluster {Name:false-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:17:23.318656    3264 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:17:23.322025    3264 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:17:23.357156    3264 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:17:23.357156    3264 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:17:23.361899    3264 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:17:23.397434    3264 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:17:23.397434    3264 cache_images.go:86] Images are preloaded, skipping loading
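The two "docker images" listings above drive the preload decision: when every expected v1.34.2 control-plane image is already present inside the kic container, the preload tarball extraction is skipped. A sketch of that presence check, using the image tags copied from this log:

    # Sketch: confirm the expected preloaded images are all present.
    for img in \
        registry.k8s.io/kube-apiserver:v1.34.2 \
        registry.k8s.io/kube-controller-manager:v1.34.2 \
        registry.k8s.io/kube-scheduler:v1.34.2 \
        registry.k8s.io/kube-proxy:v1.34.2 \
        registry.k8s.io/etcd:3.6.5-0 \
        registry.k8s.io/pause:3.10.1 \
        registry.k8s.io/coredns/coredns:v1.12.1 \
        gcr.io/k8s-minikube/storage-provisioner:v5; do
      docker images --format '{{.Repository}}:{{.Tag}}' | grep -qx "$img" \
        || echo "missing: $img"
    done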
	I1216 06:17:23.397434    3264 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.34.2 docker true true} ...
	I1216 06:17:23.397434    3264 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=false-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:false-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false}
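The unit fragment above is the payload of the 312-byte 10-kubeadm.conf drop-in scp'd at 06:17:23.523700: the empty ExecStart= clears the stock unit's command, and the second ExecStart= supplies the node-specific invocation. The same kubelet command, re-wrapped one flag per line for readability:

    # The ExecStart from the drop-in above, one flag per line.
    /var/lib/minikube/binaries/v1.34.2/kubelet \
      --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
      --config=/var/lib/kubelet/config.yaml \
      --hostname-override=false-030800 \
      --kubeconfig=/etc/kubernetes/kubelet.conf \
      --node-ip=192.168.121.2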
	I1216 06:17:23.402011    3264 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:17:23.489510    3264 cni.go:84] Creating CNI manager for "false"
	I1216 06:17:23.489510    3264 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:17:23.489559    3264 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-030800 NodeName:false-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:17:23.489777    3264 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "false-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
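
The 2216-byte kubeadm.yaml above (scp'd at 06:17:23.566661) can be parse-checked before committing to init with kubeadm's dry-run mode; this run does not do so, so the following is only a sketch of an optional verification step:

    # Optional sketch: validate the generated config without touching
    # the node. Not executed by this test run.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run >/dev/null \
      && echo "kubeadm.yaml parses cleanly"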
	
	I1216 06:17:23.494410    3264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:17:23.506750    3264 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:17:23.512476    3264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:17:23.523700    3264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 06:17:23.545571    3264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:17:23.566661    3264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1216 06:17:23.590523    3264 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:17:23.599193    3264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:17:23.617964    3264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:17:23.776794    3264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:17:23.801612    3264 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800 for IP: 192.168.121.2
	I1216 06:17:23.801664    3264 certs.go:195] generating shared ca certs ...
	I1216 06:17:23.801744    3264 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:23.802410    3264 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:17:23.802635    3264 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:17:23.802785    3264 certs.go:257] generating profile certs ...
	I1216 06:17:23.803159    3264 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\client.key
	I1216 06:17:23.803261    3264 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\client.crt with IP's: []
	I1216 06:17:23.934059    3264 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\client.crt ...
	I1216 06:17:23.934059    3264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\client.crt: {Name:mk6842be7fceaa7634f44ab8061905a36cd5dfdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:23.934595    3264 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\client.key ...
	I1216 06:17:23.934595    3264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\client.key: {Name:mk9138c525746d554a61ae623dc5ae9c49c8aefa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:23.935582    3264 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.key.694e70dc
	I1216 06:17:23.935582    3264 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.crt.694e70dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.121.2]
	I1216 06:17:23.959840    3264 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.crt.694e70dc ...
	I1216 06:17:23.959840    3264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.crt.694e70dc: {Name:mk7e2d0d6bc3f3c4b66cc542437ccd2251110e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:23.961567    3264 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.key.694e70dc ...
	I1216 06:17:23.961567    3264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.key.694e70dc: {Name:mk7b939ace724ab98ed88eb53863b1d22e99b50d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:23.962483    3264 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.crt.694e70dc -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.crt
	I1216 06:17:23.976032    3264 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.key.694e70dc -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.key
	I1216 06:17:23.977749    3264 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.key
	I1216 06:17:23.977952    3264 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.crt with IP's: []
	I1216 06:17:24.075809    3264 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.crt ...
	I1216 06:17:24.075809    3264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.crt: {Name:mkbee8b8aca836da583326a04fba913890540132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:24.076806    3264 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.key ...
	I1216 06:17:24.076806    3264 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.key: {Name:mkb1164a0db1a96adde0baa7978456a73caccfc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:17:24.092016    3264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:17:24.092016    3264 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:17:24.092016    3264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:17:24.092016    3264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:17:24.093292    3264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:17:24.093292    3264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:17:24.093859    3264 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:17:24.094527    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:17:24.124938    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:17:24.152584    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:17:24.177423    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:17:24.203693    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:17:24.227417    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:17:24.256282    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:17:24.287950    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:17:24.311033    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:17:24.339809    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:17:24.366514    3264 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:17:24.395963    3264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:17:24.419288    3264 ssh_runner.go:195] Run: openssl version
	I1216 06:17:24.432283    3264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:17:24.448190    3264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:17:24.463014    3264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:17:24.472975    3264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:17:24.477584    3264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:17:24.530242    3264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:17:24.545333    3264 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:17:24.561823    3264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:17:24.580352    3264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:17:24.599966    3264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:17:24.607684    3264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:17:24.612149    3264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:17:24.662987    3264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:17:24.679639    3264 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:17:24.696751    3264 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:17:24.713292    3264 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:17:24.731877    3264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:17:24.741329    3264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:17:24.746013    3264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:17:24.794780    3264 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:17:24.813665    3264 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
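The test/ln/openssl sequence above implements OpenSSL's hashed-directory lookup: every CA PEM under /etc/ssl/certs needs a <subject-hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 here) so verification can locate it by subject. The generic form of the dance for a single CA file:

    # Generic form of the hash-symlink sequence above for one CA file.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"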
	I1216 06:17:24.828591    3264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:17:24.835597    3264 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:17:24.835597    3264 kubeadm.go:401] StartCluster: {Name:false-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:false-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:17:24.838588    3264 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:17:24.870591    3264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:17:24.885607    3264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:17:24.897601    3264 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:17:24.901596    3264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:17:24.912602    3264 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:17:24.912602    3264 kubeadm.go:158] found existing configuration files:
	
	I1216 06:17:24.917606    3264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:17:24.931748    3264 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:17:24.936732    3264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:17:24.953381    3264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:17:24.964375    3264 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:17:24.968380    3264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:17:24.984377    3264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:17:24.996375    3264 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:17:25.001387    3264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:17:25.016372    3264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:17:25.030386    3264 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:17:25.033382    3264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
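Each grep/rm pair above applies one rule: a leftover /etc/kubernetes/*.conf survives only if it already points at https://control-plane.minikube.internal:8443; everything else, including files that simply do not exist yet as in this first start, is cleared before kubeadm init. Folded into a loop:

    # The stale-kubeconfig sweep above, folded into a single loop.
    ep='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done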
	I1216 06:17:25.050652    3264 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:17:25.169830    3264 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:17:25.175951    3264 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:17:25.283100    3264 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:17:25.588862   10692 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:17:25.588862   10692 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:17:25.592873   10692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-030800
	I1216 06:17:25.594876   10692 cli_runner.go:164] Run: docker container inspect custom-flannel-030800 --format={{.State.Status}}
	I1216 06:17:25.646868   10692 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:17:25.646868   10692 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:17:25.647867   10692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55308 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-030800\id_rsa Username:docker}
	I1216 06:17:25.650880   10692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-030800
	I1216 06:17:25.706862   10692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55308 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-030800\id_rsa Username:docker}
	I1216 06:17:25.754203   10692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:17:25.888872   10692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:17:25.960244   10692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:17:26.080059   10692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:17:26.366132   10692 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1216 06:17:26.874924   10692 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-030800" context rescaled to 1 replicas
	I1216 06:17:27.071517   10692 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1112569s)
	I1216 06:17:27.072515   10692 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1826283s)
	I1216 06:17:27.076527   10692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-030800
	I1216 06:17:27.135518   10692 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:17:27.136516   10692 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-030800" to be "Ready" ...
	I1216 06:17:27.138516   10692 addons.go:530] duration metric: took 1.6287528s for enable addons: enabled=[storage-provisioner default-storageclass]
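The sed pipeline at 06:17:25.754203 is the mechanism behind the "host record injected" line: it splices a hosts { ... fallthrough } stanza into the Corefile held in the coredns ConfigMap and replaces the object in place. A sketch for inspecting the injected stanza afterwards, using the paths from this log:

    # Sketch: show the hosts stanza injected into CoreDNS above.
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'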
	W1216 06:17:24.136492    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:24.748010    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:17:24.827592    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:17:24.828591    2100 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
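The failure above is mechanical rather than a YAML problem: kubectl's validation needs the OpenAPI document from the apiserver, and nothing is listening on localhost:8443 yet, hence "apply failed, will retry". A sketch of the equivalent recovery, assuming the apiserver does eventually bind:

    # Sketch: wait for the apiserver socket, then retry the addon apply.
    # Assumption: the apiserver eventually comes up on localhost:8443.
    until curl -ksf https://localhost:8443/livez >/dev/null 2>&1; do
      sleep 2
    done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml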
	I1216 06:17:31.092610    7444 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1216 06:17:31.092741    7444 kubeadm.go:319] 
	I1216 06:17:31.093470    7444 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:17:31.099820    7444 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:17:31.100783    7444 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:17:31.100783    7444 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:17:31.100783    7444 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:17:31.100783    7444 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:17:31.100783    7444 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:17:31.101449    7444 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:17:31.101967    7444 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:17:31.102005    7444 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:17:31.102536    7444 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:17:31.102707    7444 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:17:31.102899    7444 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:17:31.103093    7444 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:17:31.103215    7444 kubeadm.go:319] OS: Linux
	I1216 06:17:31.104167    7444 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:17:31.104314    7444 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:17:31.104512    7444 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:17:31.105204    7444 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:17:31.105393    7444 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:17:31.105570    7444 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:17:31.105745    7444 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:17:31.115935    7444 out.go:252]   - Generating certificates and keys ...
	I1216 06:17:31.115935    7444 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:17:31.115935    7444 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:17:31.116944    7444 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:17:31.117942    7444 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:17:31.117942    7444 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:17:31.118942    7444 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:17:31.121689    7444 out.go:252]   - Booting up control plane ...
	I1216 06:17:31.121689    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:17:31.122230    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:17:31.122332    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:17:31.122517    7444 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:17:31.122517    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:17:31.122517    7444 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:17:31.123262    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:17:31.123388    7444 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:17:31.123575    7444 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:17:31.123575    7444 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:17:31.123575    7444 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000088487s
	I1216 06:17:31.123575    7444 kubeadm.go:319] 
	I1216 06:17:31.123575    7444 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:17:31.124275    7444 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:17:31.124641    7444 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:17:31.124697    7444 kubeadm.go:319] 
	I1216 06:17:31.124826    7444 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:17:31.124826    7444 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:17:31.124826    7444 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:17:31.124826    7444 kubeadm.go:319] 
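kubeadm's triage suggestion above maps to three concrete checks: the unit's state, the recent kubelet journal (which this run gathers itself at 06:17:31.599050), and the 10248 health endpoint the failed 4m0s probe was polling:

    # The checks behind kubeadm's suggestion above, in order.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet -n 100 --no-pager
    curl -sS http://127.0.0.1:10248/healthz; echo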
	I1216 06:17:31.124826    7444 kubeadm.go:403] duration metric: took 8m4.1037503s to StartCluster
	I1216 06:17:31.125361    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:17:31.129814    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:17:31.195782    7444 cri.go:89] found id: ""
	I1216 06:17:31.195840    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.195840    7444 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:17:31.195840    7444 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:17:31.201335    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:17:31.244231    7444 cri.go:89] found id: ""
	I1216 06:17:31.244282    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.244282    7444 logs.go:284] No container was found matching "etcd"
	I1216 06:17:31.244282    7444 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:17:31.248874    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:17:31.298415    7444 cri.go:89] found id: ""
	I1216 06:17:31.298503    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.298503    7444 logs.go:284] No container was found matching "coredns"
	I1216 06:17:31.298503    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:17:31.303984    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:17:31.345786    7444 cri.go:89] found id: ""
	I1216 06:17:31.345786    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.345786    7444 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:17:31.345786    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:17:31.350541    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:17:31.393153    7444 cri.go:89] found id: ""
	I1216 06:17:31.393153    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.393153    7444 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:17:31.393153    7444 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:17:31.400134    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:17:31.460134    7444 cri.go:89] found id: ""
	I1216 06:17:31.460134    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.460134    7444 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:17:31.460134    7444 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:17:31.465139    7444 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:17:31.523126    7444 cri.go:89] found id: ""
	I1216 06:17:31.523126    7444 logs.go:282] 0 containers: []
	W1216 06:17:31.523126    7444 logs.go:284] No container was found matching "kindnet"
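The seven identical listings above are the post-mortem container sweep: for each control-plane component, list any container, running or exited, whose name matches; every sweep coming back empty is what confirms the kubelet never launched the static pods. The same sweep as one loop:

    # The per-component container sweep above, as a single loop.
    for name in kube-apiserver etcd coredns kube-scheduler \
        kube-proxy kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done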
	I1216 06:17:31.523126    7444 logs.go:123] Gathering logs for container status ...
	I1216 06:17:31.523126    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:17:31.599050    7444 logs.go:123] Gathering logs for kubelet ...
	I1216 06:17:31.599050    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:17:31.694219    7444 logs.go:123] Gathering logs for dmesg ...
	I1216 06:17:31.694219    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:17:31.735217    7444 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:17:31.735217    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:17:31.846635    7444 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:17:31.835022   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.836094   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.837759   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.840112   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.842430   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:17:31.835022   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.836094   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.837759   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.840112   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:31.842430   10423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:17:31.846635    7444 logs.go:123] Gathering logs for Docker ...
	I1216 06:17:31.846635    7444 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:17:31.887628    7444 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000088487s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:17:31.887628    7444 out.go:285] * 
	W1216 06:17:31.887628    7444 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1216 06:17:31.887628    7444 out.go:285] * 
	W1216 06:17:31.890631    7444 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:17:31.902634    7444 out.go:203] 
	W1216 06:17:31.906624    7444 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1216 06:17:31.906624    7444 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:17:31.906624    7444 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:17:31.911624    7444 out.go:203] 
	W1216 06:17:29.168071   10692 node_ready.go:57] node "custom-flannel-030800" has "Ready":"False" status (will retry)
	W1216 06:17:31.646634   10692 node_ready.go:57] node "custom-flannel-030800" has "Ready":"False" status (will retry)
	
	
	==> Docker <==
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165095582Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165188891Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165199992Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165205393Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165211193Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165233596Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165273599Z" level=info msg="Initializing buildkit"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.285487942Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291596049Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291751064Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291846574Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291875877Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:09:24 newest-cni-256200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:09:25 newest-cni-256200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:09:25 newest-cni-256200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:17:34.311185   10591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:34.312199   10591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:34.313564   10591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:34.314697   10591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:17:34.315558   10591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000002] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.218608] tmpfs: Unknown parameter 'noswap'
	[  +0.580938] CPU: 10 PID: 427118 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f9173e5bb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f9173e5baf6.
	[  +0.000001] RSP: 002b:00007ffd0785e1c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.911345] CPU: 2 PID: 427320 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f91ebee3b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f91ebee3af6.
	[  +0.000001] RSP: 002b:00007ffe1a3884d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +10.361903] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:17:34 up  1:53,  0 user,  load average: 7.54, 5.17, 4.36
	Linux newest-cni-256200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:17:31 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:17:31 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 16 06:17:31 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:31 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:31 newest-cni-256200 kubelet[10433]: E1216 06:17:31.934844   10433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:17:31 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:17:31 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:17:32 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 16 06:17:32 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:32 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:32 newest-cni-256200 kubelet[10447]: E1216 06:17:32.692740   10447 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:17:32 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:17:32 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:17:33 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 16 06:17:33 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:33 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:33 newest-cni-256200 kubelet[10474]: E1216 06:17:33.418006   10474 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:17:33 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:17:33 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:17:34 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 16 06:17:34 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:34 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:17:34 newest-cni-256200 kubelet[10553]: E1216 06:17:34.161463   10553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:17:34 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:17:34 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
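The kubelet journal above shows the actual root cause: kubelet v1.35.0-beta.0 refuses to start on this cgroup v1 (WSL2) host unless the kubelet configuration option 'FailCgroupV1' is set to 'false', so the service loops through restart counters 319-322 and the kubeadm health check at 127.0.0.1:10248 never gets an answer. A minimal reproduction sketch on a similar host (profile name taken from this run; the start flag is the one minikube itself suggests in the log above, and it is not verified here to clear the v1.35 cgroup check):

	# Confirm the cgroup version inside the kic container: "tmpfs" means cgroup v1, "cgroup2fs" means v2.
	out/minikube-windows-amd64.exe ssh -p newest-cni-256200 sudo stat -fc %T /sys/fs/cgroup
	# Retry with the cgroup-driver suggestion printed by minikube above.
	out/minikube-windows-amd64.exe start -p newest-cni-256200 --extra-config=kubelet.cgroup-driver=systemd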
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200: exit status 6 (674.646ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 06:17:35.476538    8884 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-256200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (520.13s)
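For the stale-context warning in the status output above, the remedy the tool itself points at is its context repair command, though it can only help once the profile actually reaches Running; here the apiserver never came up. A sketch, using this run's profile name:

	out/minikube-windows-amd64.exe update-context -p newest-cni-256200
	kubectl config current-context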

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (5.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-686300 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-686300 create -f testdata\busybox.yaml: exit status 1 (93.2993ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-686300" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-686300 create -f testdata\busybox.yaml failed: exit status 1
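The create fails before ever reaching a cluster: the kubeconfig at C:\Users\jenkins.minikube4\minikube-integration\kubeconfig never received a no-preload-686300 entry because the earlier start never registered the cluster (see the status stderr below), so kubectl cannot resolve the context. A quick check, as a sketch:

	kubectl config get-contexts --kubeconfig C:\Users\jenkins.minikube4\minikube-integration\kubeconfig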
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-686300
helpers_test.go:244: (dbg) docker inspect no-preload-686300:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc",
	        "Created": "2025-12-16T06:04:57.603416998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:04:57.945459203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hosts",
	        "LogPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc-json.log",
	        "Name": "/no-preload-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-686300",
	                "Source": "/var/lib/docker/volumes/no-preload-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-686300",
	                "name.minikube.sigs.k8s.io": "no-preload-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9eaf22c59ece58cc41ccdd6b1ffbec9338fd4c996e850e9f23f89cd055f1d4e3",
	            "SandboxKey": "/var/run/docker/netns/9eaf22c59ece",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54238"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54239"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54240"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c88c2c552749da4ec31002e488ccc5e8184c9ce7ba360033e35a2aa2c5aead9",
	                    "EndpointID": "c09b65cdfb104f0ebd3eca48e5283746dc009186edbfa5d2e23372c6159c69c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-686300",
	                        "f98924d46fbc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
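The inspect dump confirms the kic container itself is fine at the Docker layer: State.Status is running and the apiserver port 8443/tcp is published on 127.0.0.1:54242, so the failure is inside the guest. The same fields can be pulled directly with Go templates instead of scanning the JSON (a sketch using the container name from this run):

	docker inspect -f '{{.State.Status}}' no-preload-686300
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-686300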
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 6 (575.2171ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 06:13:45.863631    8896 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25: (1.1271042s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-030800 sudo systemctl status containerd --all --full --no-pager                                        │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo systemctl cat containerd --no-pager                                                        │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo cat /lib/systemd/system/containerd.service                                                 │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo cat /etc/containerd/config.toml                                                            │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo containerd config dump                                                                     │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo systemctl status crio --all --full --no-pager                                              │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │                     │
	│ ssh     │ -p auto-030800 sudo systemctl cat crio --no-pager                                                              │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo crio config                                                                                │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ delete  │ -p auto-030800                                                                                                 │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ start   │ -p kindnet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 pgrep -a kubelet                                                                             │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/nsswitch.conf                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/hosts                                                                          │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/resolv.conf                                                                    │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crictl pods                                                                             │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crictl ps --all                                                                         │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo ip a s                                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo ip r s                                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo iptables-save                                                                           │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo iptables -t nat -L -n -v                                                                │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status kubelet --all --full --no-pager                                        │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat kubelet --no-pager                                                        │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo journalctl -xeu kubelet --all --full --no-pager                                         │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:11:49
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:11:49.340795    6788 out.go:360] Setting OutFile to fd 1712 ...
	I1216 06:11:49.386344    6788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:11:49.386344    6788 out.go:374] Setting ErrFile to fd 1196...
	I1216 06:11:49.386390    6788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:11:49.401091    6788 out.go:368] Setting JSON to false
	I1216 06:11:49.404855    6788 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6531,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:11:49.405055    6788 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:11:49.408997    6788 out.go:179] * [kindnet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:11:49.412763    6788 notify.go:221] Checking for updates...
	I1216 06:11:49.414957    6788 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:11:49.416858    6788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:11:49.419397    6788 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:11:49.421529    6788 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:11:49.423543    6788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:11:49.426393    6788 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.427388    6788 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.427640    6788 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.428138    6788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:11:49.549056    6788 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:11:49.552567    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:11:49.779179    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:11:49.756494835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
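
The `docker system info --format "{{json .}}"` calls above are how minikube snapshots daemon state before validating a driver. A minimal standalone sketch of the same probe in Go; the DaemonInfo struct here is a hand-picked subset for illustration, not minikube's actual info.go types:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// DaemonInfo decodes a few fields from `docker system info --format "{{json .}}"`.
	// The field subset is illustrative; the real output carries many more keys.
	type DaemonInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info DaemonInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%s on %s: %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
	}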
	I1216 06:11:49.782904    6788 out.go:179] * Using the docker driver based on user configuration
	I1216 06:11:49.786690    6788 start.go:309] selected driver: docker
	I1216 06:11:49.786719    6788 start.go:927] validating driver "docker" against <nil>
	I1216 06:11:49.786755    6788 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:11:49.871381    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:11:50.104061    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:11:50.077311907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:11:50.105056    6788 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:11:50.105056    6788 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:11:50.108056    6788 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:11:50.110058    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:11:50.110058    6788 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:11:50.110058    6788 start.go:353] cluster config:
	{Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:11:50.112053    6788 out.go:179] * Starting "kindnet-030800" primary control-plane node in "kindnet-030800" cluster
	I1216 06:11:50.115067    6788 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:11:50.118075    6788 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:11:50.120078    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:11:50.120078    6788 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:11:50.120078    6788 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:11:50.120078    6788 cache.go:65] Caching tarball of preloaded images
	I1216 06:11:50.120078    6788 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:11:50.121072    6788 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:11:50.121072    6788 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json ...
	I1216 06:11:50.121072    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json: {Name:mkebea825fd6dc6adf01534f5a4bb9848abba58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:11:50.198067    6788 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:11:50.198067    6788 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:11:50.198067    6788 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:11:50.198067    6788 start.go:360] acquireMachinesLock for kindnet-030800: {Name:mk13b4d023e9ef7970ce337d36b9fc70162bc2d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:11:50.198067    6788 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-030800"
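
The acquireMachinesLock entries above show minikube's lock spec: retry every 500ms, give up after 10m. A hedged sketch of the same poll-until-timeout pattern, written here against a plain lock file rather than minikube's real lock backend:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, retrying every delay until timeout.
	// The Delay/Timeout values mirror the log's lock spec; the file-based mechanism is
	// illustrative, not minikube's implementation.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o644)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for lock " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; provisioning would run here")
	}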
	I1216 06:11:50.198067    6788 start.go:93] Provisioning new machine with config: &{Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:11:50.199067    6788 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:11:50.202064    6788 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:11:50.202064    6788 start.go:159] libmachine.API.Create for "kindnet-030800" (driver="docker")
	I1216 06:11:50.202064    6788 client.go:173] LocalClient.Create starting
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Decoding PEM data...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Parsing certificate...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Decoding PEM data...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Parsing certificate...
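
The Reading/Decoding/Parsing trio above corresponds to loading a PEM certificate from disk. A minimal sketch of that sequence with the Go standard library (the "ca.pem" path is a placeholder for the .minikube certs files the log reads):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Placeholder path; the log loads ca.pem and cert.pem from the .minikube certs dir.
		data, err := os.ReadFile("ca.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data) // "Decoding PEM data..."
		if block == nil || block.Type != "CERTIFICATE" {
			panic("no certificate block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
		if err != nil {
			panic(err)
		}
		fmt.Println("loaded cert for", cert.Subject.CommonName, "expires", cert.NotAfter)
	}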
	I1216 06:11:50.208057    6788 cli_runner.go:164] Run: docker network inspect kindnet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:11:50.256055    6788 cli_runner.go:211] docker network inspect kindnet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:11:50.259055    6788 network_create.go:284] running [docker network inspect kindnet-030800] to gather additional debugging logs...
	I1216 06:11:50.259055    6788 cli_runner.go:164] Run: docker network inspect kindnet-030800
	W1216 06:11:50.314050    6788 cli_runner.go:211] docker network inspect kindnet-030800 returned with exit code 1
	I1216 06:11:50.314050    6788 network_create.go:287] error running [docker network inspect kindnet-030800]: docker network inspect kindnet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-030800 not found
	I1216 06:11:50.314050    6788 network_create.go:289] output of [docker network inspect kindnet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-030800 not found
	
	** /stderr **
	I1216 06:11:50.318205    6788 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:11:50.407244    6788 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.423243    6788 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.439260    6788 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.454418    6788 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.470404    6788 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.485782    6788 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.499864    6788 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001585680}
	I1216 06:11:50.499864    6788 network_create.go:124] attempt to create docker network kindnet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:11:50.504590    6788 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-030800 kindnet-030800
	I1216 06:11:50.647049    6788 network_create.go:108] docker network kindnet-030800 192.168.103.0/24 created
	I1216 06:11:50.647049    6788 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-030800" container
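
The subnet scan above walks candidate /24 blocks (stepping the third octet by 9: .49, .58, ... .103) until one is not already reserved, then gives the gateway the .1 address and the node the .2. A hedged sketch of that arithmetic; the reserved set here is hard-coded from the log for illustration, whereas minikube discovers it by inspecting existing networks:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Subnets already taken by other minikube networks, per the log above.
		reserved := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
			"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		for octet := 49; octet < 255; octet += 9 { // same 9-wide stride as the log
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if reserved[cidr] {
				fmt.Println("skipping reserved subnet", cidr)
				continue
			}
			_, ipnet, _ := net.ParseCIDR(cidr)
			base := ipnet.IP.To4()
			gateway := net.IPv4(base[0], base[1], base[2], 1) // .1, the bridge gateway
			node := net.IPv4(base[0], base[1], base[2], 2)    // first client IP, the static node IP
			fmt.Printf("using %s: gateway %s, static node IP %s\n", cidr, gateway, node)
			return
		}
	}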
	I1216 06:11:50.655126    6788 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:11:50.718220    6788 cli_runner.go:164] Run: docker volume create kindnet-030800 --label name.minikube.sigs.k8s.io=kindnet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:11:50.775893    6788 oci.go:103] Successfully created a docker volume kindnet-030800
	I1216 06:11:50.779320    6788 cli_runner.go:164] Run: docker run --rm --name kindnet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --entrypoint /usr/bin/test -v kindnet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:11:52.174069    6788 cli_runner.go:217] Completed: docker run --rm --name kindnet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --entrypoint /usr/bin/test -v kindnet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3947303s)
	I1216 06:11:52.174069    6788 oci.go:107] Successfully prepared a docker volume kindnet-030800
	I1216 06:11:52.174069    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:11:52.174069    6788 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:11:52.177694    6788 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
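
The extraction step above uses a disposable container whose entrypoint is /usr/bin/tar to unpack the lz4 preload tarball straight into the cluster's named volume. A sketch of driving that same pattern from Go; the host path and the digest-less image reference are illustrative stand-ins for the real cache locations:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mount the preload read-only, mount the named volume at /extractDir,
		// let tar run as the container's sole process, then the container exits (--rm).
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder path
			"-v", "kindnet-030800:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141", // digest elided here
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Println("preload extracted into volume")
	}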
	I1216 06:12:02.114874   11368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:12:02.115036   11368 kubeadm.go:319] 
	I1216 06:12:02.115323   11368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:12:02.119332   11368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:12:02.119332   11368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:12:02.120135   11368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:12:02.120135   11368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:12:02.120135   11368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:12:02.120871   11368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:12:02.121013   11368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:12:02.121192   11368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:12:02.122017   11368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:12:02.122194   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:12:02.122408   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:12:02.122510   11368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:12:02.122753   11368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:12:02.122840   11368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:12:02.123033   11368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:12:02.123163   11368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:12:02.123310   11368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:12:02.123421   11368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:12:02.123572   11368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:12:02.123980   11368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:12:02.124094   11368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] OS: Linux
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:12:02.124933   11368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:12:02.125112   11368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:12:02.125304   11368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:12:02.125449   11368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:12:02.125567   11368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:12:02.125730   11368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:12:02.126387   11368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:12:02.126558   11368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:12:02.407594   11368 out.go:252]   - Generating certificates and keys ...
	I1216 06:12:02.407968   11368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:12:02.408113   11368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:12:02.408288   11368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:12:02.408453   11368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:12:02.408673   11368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:12:02.408815   11368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:12:02.408921   11368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:12:02.409054   11368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:12:02.409210   11368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:12:02.409444   11368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:12:02.409514   11368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:12:02.409673   11368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:12:02.409749   11368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:12:02.409903   11368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:12:02.410062   11368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:12:02.410138   11368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:12:02.410298   11368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:12:02.410526   11368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:12:02.410600   11368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:12:02.453808   11368 out.go:252]   - Booting up control plane ...
	I1216 06:12:02.454792   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:12:02.455026   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:12:02.455098   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:12:02.455292   11368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:12:02.455588   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:12:02.455804   11368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:12:02.455984   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:12:02.456047   11368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:12:02.456475   11368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:12:02.456689   11368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:12:02.456759   11368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000829212s
	I1216 06:12:02.456833   11368 kubeadm.go:319] 
	I1216 06:12:02.456918   11368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:12:02.457018   11368 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:12:02.457186   11368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:12:02.457264   11368 kubeadm.go:319] 
	I1216 06:12:02.457466   11368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:12:02.457538   11368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:12:02.457617   11368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:12:02.457681   11368 kubeadm.go:319] 
	W1216 06:12:02.457840   11368 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829212s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
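
The kubelet-check failure above is kubeadm polling http://127.0.0.1:10248/healthz until a four-minute deadline expires. A minimal sketch of the same readiness probe; the URL and deadline are taken from the log, but this is not kubeadm's code:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "http://127.0.0.1:10248/healthz"
		deadline := time.Now().Add(4 * time.Minute) // kubeadm's "can take up to 4m0s"
		client := &http.Client{Timeout: 5 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet healthy")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("kubelet not healthy before deadline; see 'journalctl -xeu kubelet'")
	}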
	
	I1216 06:12:02.460957   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:12:02.923334   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:12:02.942284   11368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:12:02.947934   11368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:12:02.960033   11368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:12:02.960033   11368 kubeadm.go:158] found existing configuration files:
	
	I1216 06:12:02.963699   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:12:02.976249   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:12:02.980398   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:12:02.996745   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:12:03.010587   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:12:03.014857   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:12:03.033804   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.047258   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:12:03.052529   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.071112   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:12:03.084411   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:12:03.089634   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
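
The four grep/rm pairs above all apply one rule: if a kubeconfig does not point at https://control-plane.minikube.internal:8443, treat it as stale and delete it before re-running init. A condensed sketch of that loop, run locally here for simplicity (minikube issues these commands over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const wantServer = "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, name := range files {
			path := "/etc/kubernetes/" + name
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), wantServer) {
				// Missing or pointing elsewhere: remove it, as the log's `sudo rm -f` does.
				os.Remove(path)
				fmt.Println("removed stale", path)
				continue
			}
			fmt.Println("keeping", path)
		}
	}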
	I1216 06:12:03.107865   11368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:12:03.217980   11368 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:12:03.304403   11368 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:12:03.402507   11368 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:12:07.002051    6788 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.8240354s)
	I1216 06:12:07.002137    6788 kic.go:203] duration metric: took 14.8278391s to extract preloaded images to volume ...
	I1216 06:12:07.005779    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:12:07.230944    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:12:07.212321642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:12:07.234947    6788 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:12:07.472678    6788 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-030800 --name kindnet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-030800 --network kindnet-030800 --ip 192.168.103.2 --volume kindnet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:12:08.105890    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Running}}
	I1216 06:12:08.171938    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:08.232928    6788 cli_runner.go:164] Run: docker exec kindnet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:12:08.343285    6788 oci.go:144] the created container "kindnet-030800" has a running status.
	I1216 06:12:08.343285    6788 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa...
	I1216 06:12:08.510838    6788 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:12:08.587450    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:08.650452    6788 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:12:08.650452    6788 kic_runner.go:114] Args: [docker exec --privileged kindnet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:12:08.809196    6788 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa...
	I1216 06:12:10.890772    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:10.954521    6788 machine.go:94] provisionDockerMachine start ...
	I1216 06:12:10.957521    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.008520    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.023115    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.023115    6788 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:12:11.199297    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-030800
	
	I1216 06:12:11.199297    6788 ubuntu.go:182] provisioning hostname "kindnet-030800"
	I1216 06:12:11.202294    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.259757    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.259806    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.259806    6788 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-030800 && echo "kindnet-030800" | sudo tee /etc/hostname
	I1216 06:12:11.458451    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-030800
	
	I1216 06:12:11.461723    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.518816    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.519151    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.519151    6788 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:12:11.682075    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:12:11.682075    6788 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:12:11.682075    6788 ubuntu.go:190] setting up certificates
	I1216 06:12:11.682075    6788 provision.go:84] configureAuth start
	I1216 06:12:11.685801    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:11.740638    6788 provision.go:143] copyHostCerts
	I1216 06:12:11.741639    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:12:11.741639    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:12:11.741639    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:12:11.742643    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:12:11.742643    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:12:11.742643    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:12:11.743641    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:12:11.743641    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:12:11.743641    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:12:11.744645    6788 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-030800 san=[127.0.0.1 192.168.103.2 kindnet-030800 localhost minikube]
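
The server-cert step above signs a certificate whose SAN list covers 127.0.0.1, the static node IP, the hostname, localhost, and "minikube". A sketch of stamping the same SANs onto a certificate with crypto/x509; this one is self-signed for brevity, whereas minikube signs with its CA key pair:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-030800"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			// SAN entries matching the log's san=[...] list:
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:    []string{"kindnet-030800", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}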
	I1216 06:12:11.931347    6788 provision.go:177] copyRemoteCerts
	I1216 06:12:11.935348    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:12:11.939351    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.996758    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:12.128806    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:12:12.157528    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:12:12.184855    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:12:12.209875    6788 provision.go:87] duration metric: took 527.7927ms to configureAuth
	I1216 06:12:12.209875    6788 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:12:12.209875    6788 config.go:182] Loaded profile config "kindnet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:12:12.214435    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.270503    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.270548    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.270548    6788 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:12:12.443739    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:12:12.443821    6788 ubuntu.go:71] root file system type: overlay
	I1216 06:12:12.443969    6788 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:12:12.447696    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.505748    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.505780    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.505780    6788 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:12:12.696827    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:12:12.700867    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.760030    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.760715    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.760715    6788 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:12:14.220671    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:12:12.685444205 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:12:14.220671    6788 machine.go:97] duration metric: took 3.2661054s to provisionDockerMachine
	I1216 06:12:14.220671    6788 client.go:176] duration metric: took 24.0182853s to LocalClient.Create
	I1216 06:12:14.220671    6788 start.go:167] duration metric: took 24.0182853s to libmachine.API.Create "kindnet-030800"
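The docker.service update a few lines above uses a compare-then-swap idiom: `diff -u old new || { mv new old; systemctl daemon-reload; systemctl restart docker; }`, so the daemon is reloaded and restarted only when the rendered unit actually differs from what is on disk. A hedged Go sketch of that idempotent-update shape (a hypothetical helper, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged writes newContent to dst only if it differs, and reports
// whether a change happened (callers would restart docker only on true).
func updateIfChanged(dst string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unchanged: no daemon-reload / restart needed
	}
	if err := os.WriteFile(dst+".new", newContent, 0o644); err != nil {
		return false, err
	}
	// mv .new into place, mirroring the shell idiom in the log.
	return true, os.Rename(dst+".new", dst)
}

func main() {
	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n...\n"))
	fmt.Println("restart needed:", changed, "err:", err)
}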
	I1216 06:12:14.220671    6788 start.go:293] postStartSetup for "kindnet-030800" (driver="docker")
	I1216 06:12:14.220671    6788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:12:14.225965    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:12:14.228654    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.286730    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.422175    6788 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:12:14.430679    6788 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:12:14.430679    6788 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:12:14.430679    6788 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:12:14.430679    6788 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:12:14.431304    6788 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:12:14.436062    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:12:14.447557    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:12:14.476598    6788 start.go:296] duration metric: took 255.9237ms for postStartSetup
	I1216 06:12:14.481857    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:14.534874    6788 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json ...
	I1216 06:12:14.540932    6788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:12:14.544163    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.599153    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.738099    6788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:12:14.756903    6788 start.go:128] duration metric: took 24.5575075s to createHost
	I1216 06:12:14.756964    6788 start.go:83] releasing machines lock for "kindnet-030800", held for 24.5585685s
	I1216 06:12:14.761089    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:14.820995    6788 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:12:14.825383    6788 ssh_runner.go:195] Run: cat /version.json
	I1216 06:12:14.825455    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.828473    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.882924    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.883920    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:15.008272    6788 ssh_runner.go:195] Run: systemctl --version
	W1216 06:12:15.008961    6788 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
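The status-127 failure above is the root of the registry warning that follows: `curl.exe` is a Windows binary name, so running it inside the Debian guest fails with "command not found", and reachability of registry.k8s.io is never actually tested. A sketch of detecting that exit code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sh", "-c", "curl.exe -sS -m 2 https://registry.k8s.io/").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 127 {
		// 127 = "command not found": the probe binary is absent in the guest,
		// so connectivity to the registry is unknown rather than broken.
		fmt.Println("probe tool missing; connectivity unknown")
	}
}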
	I1216 06:12:15.024976    6788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:12:15.035099    6788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:12:15.039160    6788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:12:15.088926    6788 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:12:15.089002    6788 start.go:496] detecting cgroup driver to use...
	I1216 06:12:15.089002    6788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:12:15.089195    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1216 06:12:15.115148    6788 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:12:15.115148    6788 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:12:15.116205    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:12:15.133999    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:12:15.148544    6788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:12:15.153402    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:12:15.173763    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:12:15.193174    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:12:15.211967    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:12:15.230814    6788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:12:15.248897    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:12:15.268590    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:12:15.286801    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:12:15.305083    6788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:12:15.323613    6788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:12:15.340787    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:15.499010    6788 ssh_runner.go:195] Run: sudo systemctl restart containerd
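Each of the containerd edits above is an in-place `sed -i -r` rewrite of /etc/containerd/config.toml (sandbox image, cgroupfs cgroup driver, runc v2 shim, CNI conf dir), followed by a single daemon-reload and restart. One of those substitutions, reproduced as a Go regexp purely for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := "    SystemdCgroup = true\n"
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}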
	I1216 06:12:15.663518    6788 start.go:496] detecting cgroup driver to use...
	I1216 06:12:15.663548    6788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:12:15.670359    6788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:12:15.699486    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:12:15.720065    6788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:12:15.794660    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:12:15.815487    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:12:15.833957    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:12:15.857975    6788 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:12:15.872465    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:12:15.883658    6788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:12:15.905854    6788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:12:16.059572    6788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:12:16.183220    6788 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:12:16.183220    6788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:12:16.206253    6788 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:12:16.226683    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:16.363066    6788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:12:17.209602    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:12:17.234418    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:12:17.256030    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:12:17.281172    6788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:12:17.429442    6788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:12:17.579817    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:17.730956    6788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:12:17.755884    6788 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:12:17.777180    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:17.927172    6788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:12:18.030003    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:12:18.048766    6788 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:12:18.055532    6788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:12:18.064014    6788 start.go:564] Will wait 60s for crictl version
	I1216 06:12:18.069369    6788 ssh_runner.go:195] Run: which crictl
	I1216 06:12:18.080342    6788 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:12:18.125849    6788 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:12:18.129056    6788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:12:18.171478    6788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:12:18.208246    6788 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:12:18.212058    6788 cli_runner.go:164] Run: docker exec -t kindnet-030800 dig +short host.docker.internal
	I1216 06:12:18.346525    6788 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:12:18.351179    6788 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:12:18.360150    6788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
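The /etc/hosts rewrite above upserts a single entry: drop any existing line that ends in a tab plus the host name, then append a fresh mapping to the host IP just discovered via `dig +short host.docker.internal`. A Go sketch of the same upsert, with illustrative paths:

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// mirrors: grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host) // echo "IP<TAB>host"
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(upsertHost("/tmp/hosts", "192.168.65.254", "host.minikube.internal"))
}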
	I1216 06:12:18.377467    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:18.431980    6788 kubeadm.go:884] updating cluster {Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:12:18.432155    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:12:18.435467    6788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:12:18.470599    6788 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:12:18.470599    6788 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:12:18.474251    6788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:12:18.502607    6788 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:12:18.502607    6788 cache_images.go:86] Images are preloaded, skipping loading
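The preload check above runs `docker images --format {{.Repository}}:{{.Tag}}` and extracts the preload tarball only if an expected image is missing; note the two listings differ only in order, so a set comparison is what matters. A sketch of that check, with the expected list copied from the stdout block above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.34.2",
		"registry.k8s.io/kube-controller-manager:v1.34.2",
		"registry.k8s.io/kube-scheduler:v1.34.2",
		"registry.k8s.io/kube-proxy:v1.34.2",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, want := range expected {
		if !have[want] {
			fmt.Println("missing, would extract preload:", want)
		}
	}
}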
	I1216 06:12:18.502607    6788 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:12:18.502607    6788 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 06:12:18.506388    6788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:12:18.578689    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:12:18.578689    6788 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:12:18.578689    6788 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-030800 NodeName:kindnet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:12:18.579341    6788 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:12:18.585628    6788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:12:18.597522    6788 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:12:18.601494    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:12:18.615009    6788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1216 06:12:18.637536    6788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:12:18.658037    6788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
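The 2218-byte kubeadm.yaml.new just copied is the multi-document YAML stream printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A toy Go scan of such a stream's document kinds, stdlib only (the config string is a stand-in):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the kubeadm.yaml contents shown above.
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(cfg, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}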
	I1216 06:12:18.688118    6788 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:12:18.695892    6788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:12:18.714307    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:18.850314    6788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:12:18.871857    6788 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800 for IP: 192.168.103.2
	I1216 06:12:18.871857    6788 certs.go:195] generating shared ca certs ...
	I1216 06:12:18.871857    6788 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.872460    6788 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:12:18.872580    6788 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:12:18.872580    6788 certs.go:257] generating profile certs ...
	I1216 06:12:18.873200    6788 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key
	I1216 06:12:18.873250    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt with IP's: []
	I1216 06:12:18.949253    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt ...
	I1216 06:12:18.949253    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt: {Name:mkf410fba892917bdd522929abe867e46494e3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.950237    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key ...
	I1216 06:12:18.950237    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key: {Name:mkf29080c46ee2c14c10a21eb67c9cc815f21e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.951309    6788 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf
	I1216 06:12:18.951403    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:12:19.114614    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf ...
	I1216 06:12:19.114614    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf: {Name:mkb55c42e33a2ae7870887e58b6e05f71dd4daf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.115619    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf ...
	I1216 06:12:19.115619    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf: {Name:mk19d54f554eb9aa8025289f18eb07425aa3fc90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.116906    6788 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt
	I1216 06:12:19.131178    6788 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key
	I1216 06:12:19.132179    6788 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key
	I1216 06:12:19.132179    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt with IP's: []
	I1216 06:12:19.184770    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt ...
	I1216 06:12:19.184770    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt: {Name:mkde61e113e82c5dc4f7e40e38dd7355210b095d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.185771    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key ...
	I1216 06:12:19.185771    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key: {Name:mk44855c783f1633070400559fd3d672d6875e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.200773    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:12:19.200773    6788 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:12:19.201509    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:12:19.201643    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:12:19.201822    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:12:19.201993    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:12:19.202166    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:12:19.202444    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:12:19.237351    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:12:19.262028    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:12:19.287983    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:12:19.314234    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:12:19.339105    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:12:19.364652    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:12:19.396531    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:12:19.427432    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:12:19.459712    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:12:19.482706    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:12:19.510753    6788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:12:19.533021    6788 ssh_runner.go:195] Run: openssl version
	I1216 06:12:19.551437    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.569271    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:12:19.590136    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.598267    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.602512    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.651072    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:12:19.666426    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:12:19.681980    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.696016    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:12:19.714282    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.721158    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.725233    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.774540    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:12:19.793803    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:12:19.810823    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.827895    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:12:19.844802    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.853541    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.857849    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.905009    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:12:19.921560    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
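The openssl/ln pairs above build the OpenSSL-style trust directory: each CA PEM under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout -in <pem>` (b5213941, 51391683, 3ec20f2e in this run). A Go sketch of installing one such link, with illustrative paths:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certDir, hash+".0")
	os.Remove(link) // ln -fs: force-replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	_ = linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}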
	I1216 06:12:19.939199    6788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:12:19.947504    6788 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:12:19.947719    6788 kubeadm.go:401] StartCluster: {Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:12:19.950360    6788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:12:19.983797    6788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:12:20.000670    6788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:12:20.014790    6788 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:12:20.018800    6788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:12:20.032572    6788 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:12:20.032616    6788 kubeadm.go:158] found existing configuration files:
	
	I1216 06:12:20.036680    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:12:20.049905    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:12:20.054058    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:12:20.071603    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:12:20.085088    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:12:20.089085    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:12:20.106513    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:12:20.118805    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:12:20.122049    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:12:20.142303    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:12:20.154293    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:12:20.158297    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:12:20.174303    6788 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:12:20.296404    6788 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:12:20.301548    6788 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:12:20.397661    6788 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:12:33.968529    6788 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:12:33.968529    6788 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:12:33.968529    6788 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:12:33.969389    6788 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:12:33.969607    6788 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:12:33.969607    6788 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:12:33.972873    6788 out.go:252]   - Generating certificates and keys ...
	I1216 06:12:33.972873    6788 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:12:33.972873    6788 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:12:33.973434    6788 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:12:33.975209    6788 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:12:33.975828    6788 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:12:33.975933    6788 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:12:33.975933    6788 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:12:33.975933    6788 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:12:33.980556    6788 out.go:252]   - Booting up control plane ...
	I1216 06:12:33.980556    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:12:33.981078    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:12:33.981825    6788 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:12:33.981911    6788 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:12:33.981911    6788 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:12:33.981911    6788 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:12:33.982502    6788 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:12:33.982549    6788 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.041935ms
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.898426957s
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.853187439s
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502086413s
	I1216 06:12:33.983821    6788 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:12:33.983995    6788 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:12:33.983995    6788 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:12:33.984608    6788 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:12:33.984704    6788 kubeadm.go:319] [bootstrap-token] Using token: xj3a70.p80jdqi9w7ogff39
	I1216 06:12:33.994781    6788 out.go:252]   - Configuring RBAC rules ...
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:12:33.995784    6788 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:12:33.995784    6788 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:12:33.995784    6788 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:12:33.995784    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:12:33.996786    6788 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:12:33.996786    6788 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:12:33.996786    6788 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:12:33.997793    6788 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:12:33.997793    6788 kubeadm.go:319] 
	I1216 06:12:33.997912    6788 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:12:33.997912    6788 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:12:33.997912    6788 kubeadm.go:319] 
	I1216 06:12:33.997912    6788 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xj3a70.p80jdqi9w7ogff39 \
	I1216 06:12:33.998463    6788 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:12:33.998463    6788 kubeadm.go:319] 	--control-plane 
	I1216 06:12:33.998463    6788 kubeadm.go:319] 
	I1216 06:12:33.998463    6788 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:12:33.998463    6788 kubeadm.go:319] 
	I1216 06:12:33.998463    6788 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xj3a70.p80jdqi9w7ogff39 \
	I1216 06:12:33.999035    6788 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
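	Note: the join command above embeds a short-lived bootstrap token plus the discovery CA hash. If the token has expired by the time a node joins, kubeadm can mint a fresh one and reprint an equivalent command (standard kubeadm subcommands, not part of this run):

		# Hedged sketch: list existing tokens, then regenerate a full join command.
		sudo kubeadm token list
		sudo kubeadm token create --print-join-command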
	I1216 06:12:33.999035    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:12:34.001665    6788 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 06:12:34.007658    6788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 06:12:34.019612    6788 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 06:12:34.019612    6788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 06:12:34.041663    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 06:12:34.320470    6788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:12:34.325898    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:34.325972    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-030800 minikube.k8s.io/updated_at=2025_12_16T06_12_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kindnet-030800 minikube.k8s.io/primary=true
	I1216 06:12:34.337113    6788 ops.go:34] apiserver oom_adj: -16
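	The oom_adj probe above verifies that kube-apiserver is shielded from the OOM killer (an adjustment of -16). The same check, exactly as the log runs it inside the node:

		# Read the OOM score adjustment of the kube-apiserver process; expect -16.
		cat /proc/$(pgrep kube-apiserver)/oom_adj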
	I1216 06:12:34.446144    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:34.947933    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:35.448308    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:35.947898    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:36.447700    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:36.946927    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:37.445777    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:37.947107    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:38.447683    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:38.538781    6788 kubeadm.go:1114] duration metric: took 4.2182542s to wait for elevateKubeSystemPrivileges
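	elevateKubeSystemPrivileges retries "kubectl get sa default" at roughly 500ms intervals because kube-controller-manager creates the "default" ServiceAccount asynchronously, and the minikube-rbac ClusterRoleBinding is useless until it exists. A minimal sketch of that poll, reusing the kubeconfig and binary paths from the log:

		# Poll until the "default" ServiceAccount has been created.
		until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done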
	I1216 06:12:38.538869    6788 kubeadm.go:403] duration metric: took 18.5909004s to StartCluster
	I1216 06:12:38.538924    6788 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:38.538924    6788 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:12:38.540348    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:38.541592    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:12:38.541592    6788 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:12:38.541543    6788 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:12:38.541748    6788 addons.go:70] Setting storage-provisioner=true in profile "kindnet-030800"
	I1216 06:12:38.541780    6788 addons.go:239] Setting addon storage-provisioner=true in "kindnet-030800"
	I1216 06:12:38.541927    6788 host.go:66] Checking if "kindnet-030800" exists ...
	I1216 06:12:38.541927    6788 addons.go:70] Setting default-storageclass=true in profile "kindnet-030800"
	I1216 06:12:38.541927    6788 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-030800"
	I1216 06:12:38.541927    6788 config.go:182] Loaded profile config "kindnet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:12:38.544488    6788 out.go:179] * Verifying Kubernetes components...
	I1216 06:12:38.550892    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.550892    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.553045    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:38.611835    6788 addons.go:239] Setting addon default-storageclass=true in "kindnet-030800"
	I1216 06:12:38.611835    6788 host.go:66] Checking if "kindnet-030800" exists ...
	I1216 06:12:38.612829    6788 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:12:38.615828    6788 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:12:38.615828    6788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:12:38.618830    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.619830    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:38.670833    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:38.671832    6788 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:12:38.671832    6788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:12:38.674835    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:38.728830    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:38.786244    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:12:38.993052    6788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:12:39.294182    6788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:12:39.393642    6788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:12:39.901700    6788 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.1154417s)
	I1216 06:12:39.901700    6788 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
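	The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block immediately before the forward directive and a log directive before errors, so pods can resolve host.minikube.internal to the Windows host. Reconstructed from the sed expressions in the log, the patched Corefile fragment should read roughly:

		log
		errors
		hosts {
		   192.168.65.254 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf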
	I1216 06:12:40.331433    6788 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.0372375s)
	I1216 06:12:40.331433    6788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3383629s)
	I1216 06:12:40.335284    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:40.389545    6788 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:12:40.393549    6788 addons.go:530] duration metric: took 1.8519322s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:12:40.400560    6788 node_ready.go:35] waiting up to 15m0s for node "kindnet-030800" to be "Ready" ...
	I1216 06:12:40.413561    6788 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-030800" context rescaled to 1 replicas
	W1216 06:12:42.406617    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:44.907499    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:47.406803    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:49.908547    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:52.408158    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:54.907731    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:56.908056    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:59.407002    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:13:01.407755    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	I1216 06:13:03.406403    6788 node_ready.go:49] node "kindnet-030800" is "Ready"
	I1216 06:13:03.406463    6788 node_ready.go:38] duration metric: took 23.0055942s for node "kindnet-030800" to be "Ready" ...
	I1216 06:13:03.406495    6788 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:13:03.411466    6788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:13:03.430701    6788 api_server.go:72] duration metric: took 24.8886193s to wait for apiserver process to appear ...
	I1216 06:13:03.430701    6788 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:13:03.430701    6788 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54866/healthz ...
	I1216 06:13:03.440994    6788 api_server.go:279] https://127.0.0.1:54866/healthz returned 200:
	ok
	I1216 06:13:03.443640    6788 api_server.go:141] control plane version: v1.34.2
	I1216 06:13:03.443640    6788 api_server.go:131] duration metric: took 12.9387ms to wait for apiserver health ...
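	The healthz probe goes through the Docker-published host port (54866 here), which maps to the apiserver's 8443 inside the container. A hedged equivalent from the Windows host (-k skips TLS verification, since the host shell has no minikube CA configured):

		# Probe the forwarded apiserver health endpoint; a healthy apiserver prints "ok".
		curl -sk https://127.0.0.1:54866/healthz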
	I1216 06:13:03.443640    6788 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:13:03.449411    6788 system_pods.go:59] 8 kube-system pods found
	I1216 06:13:03.449411    6788 system_pods.go:61] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.449411    6788 system_pods.go:61] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.449411    6788 system_pods.go:74] duration metric: took 5.7708ms to wait for pod list to return data ...
	I1216 06:13:03.449411    6788 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:13:03.454158    6788 default_sa.go:45] found service account: "default"
	I1216 06:13:03.454158    6788 default_sa.go:55] duration metric: took 4.7472ms for default service account to be created ...
	I1216 06:13:03.454158    6788 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:13:03.462563    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.462563    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.462563    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.462563    6788 retry.go:31] will retry after 200.474088ms: missing components: kube-dns
	I1216 06:13:03.671143    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.671143    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.671143    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.671143    6788 retry.go:31] will retry after 243.807956ms: missing components: kube-dns
	I1216 06:13:03.922250    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.922250    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.922250    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.922250    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.922250    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.922374    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.922374    6788 retry.go:31] will retry after 406.562398ms: missing components: kube-dns
	I1216 06:13:04.338229    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:04.338229    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:04.338229    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:04.338820    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:04.338820    6788 retry.go:31] will retry after 404.864087ms: missing components: kube-dns
	I1216 06:13:04.751475    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:04.751475    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:04.751475    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:04.751475    6788 retry.go:31] will retry after 580.937637ms: missing components: kube-dns
	I1216 06:13:05.340705    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:05.340705    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Running
	I1216 06:13:05.340705    6788 system_pods.go:126] duration metric: took 1.8865217s to wait for k8s-apps to be running ...
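	The loop above retries with growing backoff until CoreDNS ("kube-dns") leaves Pending; every other kube-system pod was already Running. A hedged one-liner that waits on the same condition, using the k8s-app=kube-dns label the test itself selects on below:

		# Block until the CoreDNS pod(s) report Ready.
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m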
	I1216 06:13:05.340705    6788 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:13:05.345162    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:13:05.363995    6788 system_svc.go:56] duration metric: took 23.2385ms WaitForService to wait for kubelet
	I1216 06:13:05.364042    6788 kubeadm.go:587] duration metric: took 26.8218872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:13:05.364042    6788 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:13:05.368328    6788 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:13:05.368328    6788 node_conditions.go:123] node cpu capacity is 16
	I1216 06:13:05.368328    6788 node_conditions.go:105] duration metric: took 4.2856ms to run NodePressure ...
	I1216 06:13:05.368328    6788 start.go:242] waiting for startup goroutines ...
	I1216 06:13:05.368328    6788 start.go:247] waiting for cluster config update ...
	I1216 06:13:05.368328    6788 start.go:256] writing updated cluster config ...
	I1216 06:13:05.373800    6788 ssh_runner.go:195] Run: rm -f paused
	I1216 06:13:05.381487    6788 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:13:05.388287    6788 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2klg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.395940    6788 pod_ready.go:94] pod "coredns-66bc5c9577-2klg5" is "Ready"
	I1216 06:13:05.395940    6788 pod_ready.go:86] duration metric: took 7.6527ms for pod "coredns-66bc5c9577-2klg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.402352    6788 pod_ready.go:83] waiting for pod "etcd-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.409558    6788 pod_ready.go:94] pod "etcd-kindnet-030800" is "Ready"
	I1216 06:13:05.409558    6788 pod_ready.go:86] duration metric: took 7.2054ms for pod "etcd-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.413805    6788 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.423218    6788 pod_ready.go:94] pod "kube-apiserver-kindnet-030800" is "Ready"
	I1216 06:13:05.423218    6788 pod_ready.go:86] duration metric: took 9.4134ms for pod "kube-apiserver-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.426944    6788 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.790782    6788 pod_ready.go:94] pod "kube-controller-manager-kindnet-030800" is "Ready"
	I1216 06:13:05.790782    6788 pod_ready.go:86] duration metric: took 363.8334ms for pod "kube-controller-manager-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.989561    6788 pod_ready.go:83] waiting for pod "kube-proxy-w78wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.398538    6788 pod_ready.go:94] pod "kube-proxy-w78wd" is "Ready"
	I1216 06:13:06.398538    6788 pod_ready.go:86] duration metric: took 408.972ms for pod "kube-proxy-w78wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.590868    6788 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.989680    6788 pod_ready.go:94] pod "kube-scheduler-kindnet-030800" is "Ready"
	I1216 06:13:06.989680    6788 pod_ready.go:86] duration metric: took 398.2881ms for pod "kube-scheduler-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.989680    6788 pod_ready.go:40] duration metric: took 1.6081714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:13:07.082864    6788 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:13:07.089654    6788 out.go:179] * Done! kubectl is now configured to use "kindnet-030800" cluster and "default" namespace by default
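	At this point the host kubeconfig has a kindnet-030800 context selected. A hedged end-to-end sanity check from the host:

		# Confirm the new context reaches the cluster.
		kubectl --context kindnet-030800 get nodes -o wide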
	I1216 06:13:29.437822    7444 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:29.437822    7444 kubeadm.go:319] 
	I1216 06:13:29.438345    7444 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:29.442203    7444 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:29.442288    7444 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:29.442391    7444 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:29.442422    7444 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:29.442532    7444 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:29.442639    7444 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:29.442697    7444 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:29.443354    7444 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:29.443491    7444 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:29.444615    7444 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:29.445371    7444 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:29.445501    7444 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:29.445583    7444 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:29.445630    7444 kubeadm.go:319] OS: Linux
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:29.446464    7444 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:29.447176    7444 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:29.451165    7444 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:29.453414    7444 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:29.453588    7444 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:29.453727    7444 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:29.457212    7444 out.go:252]   - Booting up control plane ...
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:29.457981    7444 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:29.458269    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:29.458458    7444 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:29.459071    7444 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:29.459187    7444 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.0010934s
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459234    7444 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459809    7444 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 
	W1216 06:13:29.459809    7444 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.0010934s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
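	The failing check is kubeadm's own HTTP call against the kubelet health endpoint, quoted verbatim in the error. A hedged way to reproduce and triage it inside the node, using the exact probe plus the two commands the error text recommends:

		# The probe kubeadm performs; a healthy kubelet answers "ok".
		curl -sSL http://127.0.0.1:10248/healthz
		# If it times out, inspect the kubelet unit and its recent journal.
		systemctl status kubelet
		journalctl -xeu kubelet --no-pager | tail -n 50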
	
	I1216 06:13:29.463847    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:13:29.953578    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:13:29.979536    7444 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:13:29.985016    7444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:13:29.996493    7444 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:13:29.996493    7444 kubeadm.go:158] found existing configuration files:
	
	I1216 06:13:30.000490    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:13:30.012501    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:13:30.016488    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:13:30.031492    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:13:30.042509    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:13:30.046490    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:13:30.066672    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.081178    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:13:30.085494    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.103106    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:13:30.115159    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:13:30.119152    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
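	The grep/rm sequence above is minikube's stale-config sweep: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. Condensed into one loop (same logic, hedged restatement):

		# Drop any kubeconfig that does not point at the expected endpoint.
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done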
	I1216 06:13:30.134150    7444 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:13:30.260471    7444 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:13:30.351419    7444 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:13:30.450039    7444 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
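	The SystemVerification warning above is a plausible root cause for this profile's failure: the WSL2 node runs cgroups v1, and per the warning a v1.35+ kubelet refuses to run on cgroup v1 unless the opt-out is set. The opt-out it names is the KubeletConfiguration field FailCgroupV1; a hedged fragment of what that looks like (field name taken from the warning, file placement not verified against minikube's patching):

		apiVersion: kubelet.config.k8s.io/v1beta1
		kind: KubeletConfiguration
		# Explicitly allow running on a cgroup v1 host (deprecated path).
		failCgroupV1: false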
	I1216 06:13:41.144775    1840 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:41.144775    1840 kubeadm.go:319] 
	I1216 06:13:41.144775    1840 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:41.148846    1840 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:41.149531    1840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:41.149956    1840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:41.150211    1840 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:41.150759    1840 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:41.150889    1840 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:41.151079    1840 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:41.151275    1840 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:41.151526    1840 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:41.151790    1840 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:41.153311    1840 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:41.153615    1840 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:41.153787    1840 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:41.154024    1840 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] OS: Linux
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:41.154727    1840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:41.155306    1840 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:41.156052    1840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:41.158898    1840 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:13:41.159722    1840 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:13:41.159918    1840 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:13:41.160046    1840 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:13:41.160705    1840 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:13:41.160782    1840 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:13:41.160887    1840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:41.161622    1840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:41.161622    1840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:41.164114    1840 out.go:252]   - Booting up control plane ...
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:41.166093    1840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000506958s
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 
	I1216 06:13:41.167095    1840 kubeadm.go:403] duration metric: took 8m4.2111844s to StartCluster
	I1216 06:13:41.167095    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:13:41.170749    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:13:41.232071    1840 cri.go:89] found id: ""
	I1216 06:13:41.232103    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.232153    1840 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:13:41.232153    1840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:13:41.237864    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:13:41.286666    1840 cri.go:89] found id: ""
	I1216 06:13:41.286666    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.286666    1840 logs.go:284] No container was found matching "etcd"
	I1216 06:13:41.286666    1840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:13:41.291424    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:13:41.333354    1840 cri.go:89] found id: ""
	I1216 06:13:41.333354    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.333354    1840 logs.go:284] No container was found matching "coredns"
	I1216 06:13:41.333354    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:13:41.337361    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:13:41.379362    1840 cri.go:89] found id: ""
	I1216 06:13:41.379362    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.379362    1840 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:13:41.379362    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:13:41.383354    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:13:41.434935    1840 cri.go:89] found id: ""
	I1216 06:13:41.434935    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.434935    1840 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:13:41.434935    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:13:41.438925    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:13:41.481929    1840 cri.go:89] found id: ""
	I1216 06:13:41.481929    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.481929    1840 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:13:41.481929    1840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:13:41.485920    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:13:41.530524    1840 cri.go:89] found id: ""
	I1216 06:13:41.530614    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.530614    1840 logs.go:284] No container was found matching "kindnet"
	I1216 06:13:41.530666    1840 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:13:41.530666    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:13:41.626225    1840 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:13:41.626225    1840 logs.go:123] Gathering logs for Docker ...
	I1216 06:13:41.626225    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:13:41.658338    1840 logs.go:123] Gathering logs for container status ...
	I1216 06:13:41.658338    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:13:41.703328    1840 logs.go:123] Gathering logs for kubelet ...
	I1216 06:13:41.703328    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:13:41.762322    1840 logs.go:123] Gathering logs for dmesg ...
	I1216 06:13:41.762322    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 06:13:41.799388    1840 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.799388    1840 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above; duplicate block elided]
	
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.801787    1840 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:13:41.811220    1840 out.go:203] 
	W1216 06:13:41.815157    1840 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above; duplicate block elided]
	
	W1216 06:13:41.815157    1840 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:13:41.815157    1840 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:13:41.817851    1840 out.go:203] 
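The two suggestions minikube prints above can be tried by hand before rerunning the suite. A minimal sketch, assuming the no-preload-686300 profile name from this log and that the commands run from the test workspace root the way the harness does; the --extra-config value is quoted verbatim from minikube's own suggestion and is not guaranteed to resolve the WSL2 cgroup failure:

	# Inspect the failing kubelet's journal inside the node container
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Retry the start with the cgroup-driver override minikube itself suggests
	out/minikube-windows-amd64.exe start -p no-preload-686300 --extra-config=kubelet.cgroup-driver=systemd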
	
	
	==> Docker <==
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402735317Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402828927Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402844429Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402852530Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402861131Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402891834Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402934238Z" level=info msg="Initializing buildkit"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.580612363Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.589812059Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590000679Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590040684Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590028382Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
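Note that cri-dockerd logs "Setting cgroupDriver cgroupfs" above, while the suggested fix passes kubelet.cgroup-driver=systemd; kubelet and the container runtime must agree on the driver. A quick check of what the node's Docker daemon actually reports, assuming the same profile name:

	# Print the cgroup driver the Docker daemon inside the node is using
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- docker info --format '{{.CgroupDriver}}'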
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:13:46.902267   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:46.903666   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:46.905027   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:46.907300   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:46.908371   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.809571] CPU: 0 PID: 390218 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8788dabb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8788dabaf6.
	[  +0.000001] RSP: 002b:00007ffd609e6e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.827622] CPU: 14 PID: 390383 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fddca31bb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fddca31baf6.
	[  +0.000001] RSP: 002b:00007ffcdf5a88f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.540385] tmpfs: Unknown parameter 'noswap'
	[  +9.462694] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:13:46 up  1:50,  0 user,  load average: 2.53, 4.03, 3.95
	Linux no-preload-686300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
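Since the kubelet failure above turns on cgroup v1 vs v2, it can help to confirm which hierarchy this WSL2 kernel actually exposes to the node. A generic check, not taken from this report:

	# cgroup2fs means cgroup v2; tmpfs means the legacy v1 hierarchy the kubelet is rejecting
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- stat -fc %T /sys/fs/cgroup/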
	
	
	==> kubelet <==
	Dec 16 06:13:43 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:44 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 16 06:13:44 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:44 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:44 no-preload-686300 kubelet[11154]: E1216 06:13:44.199439   11154 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:44 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:44 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:44 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 16 06:13:44 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:44 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:44 no-preload-686300 kubelet[11168]: E1216 06:13:44.933919   11168 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:44 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:44 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:45 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 16 06:13:45 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:45 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:45 no-preload-686300 kubelet[11196]: E1216 06:13:45.703179   11196 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:45 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:45 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:46 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 327.
	Dec 16 06:13:46 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:46 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:46 no-preload-686300 kubelet[11225]: E1216 06:13:46.436250   11225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:46 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:46 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
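The kubelet journal above shows a restart loop on "kubelet is configured to not run on a host using cgroup v1", matching the kubeadm SystemVerification warning that 'FailCgroupV1' must be set to 'false' to keep running on cgroup v1. A hypothetical workaround sketch; the failCgroupV1 field name is taken from that warning and its linked KEP, and whether this minikube build preserves a hand-edited kubelet config across restarts is an assumption:

	# Append the override named in the kubeadm warning, then restart the kubelet (untested sketch)
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- sudo sh -c \
	  'grep -q failCgroupV1 /var/lib/kubelet/config.yaml || echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml'
	out/minikube-windows-amd64.exe ssh -p no-preload-686300 -- sudo systemctl restart kubelet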
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 6 (575.2714ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 06:13:48.004915    7484 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
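The status call warns that kubectl points at a stale context and its stderr shows no-preload-686300 missing from the kubeconfig. Following the log's own advice, with the profile name assumed from this run:

	# Repoint the kubectl context at the profile, as the status output suggests
	out/minikube-windows-amd64.exe update-context -p no-preload-686300
	# Confirm the context now exists in the kubeconfig that KUBECONFIG points at
	kubectl config get-contexts no-preload-686300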
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-686300
helpers_test.go:244: (dbg) docker inspect no-preload-686300:

-- stdout --
	[
	    {
	        "Id": "f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc",
	        "Created": "2025-12-16T06:04:57.603416998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:04:57.945459203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hosts",
	        "LogPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc-json.log",
	        "Name": "/no-preload-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-686300",
	                "Source": "/var/lib/docker/volumes/no-preload-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-686300",
	                "name.minikube.sigs.k8s.io": "no-preload-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9eaf22c59ece58cc41ccdd6b1ffbec9338fd4c996e850e9f23f89cd055f1d4e3",
	            "SandboxKey": "/var/run/docker/netns/9eaf22c59ece",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54238"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54239"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54240"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c88c2c552749da4ec31002e488ccc5e8184c9ce7ba360033e35a2aa2c5aead9",
	                    "EndpointID": "c09b65cdfb104f0ebd3eca48e5283746dc009186edbfa5d2e23372c6159c69c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-686300",
	                        "f98924d46fbc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
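The inspect output shows 8443/tcp published on 127.0.0.1:54242, so the apiserver would be reachable from the host on that port if it were up. A hedged probe, assuming curl is available on the host:

	# Read back the host port Docker mapped for the apiserver
	docker port no-preload-686300 8443/tcp
	# Probe it; a refused connection here matches the errors throughout this report (-k: self-signed cert)
	curl -k https://127.0.0.1:54242/healthz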
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 6 (572.5986ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 06:13:48.657175   13964 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25: (1.1570368s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                      │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-030800 sudo systemctl status crio --all --full --no-pager                                              │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │                     │
	│ ssh     │ -p auto-030800 sudo systemctl cat crio --no-pager                                                              │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ ssh     │ -p auto-030800 sudo crio config                                                                                │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ delete  │ -p auto-030800                                                                                                 │ auto-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:11 UTC │
	│ start   │ -p kindnet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:11 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 pgrep -a kubelet                                                                             │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/nsswitch.conf                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/hosts                                                                          │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/resolv.conf                                                                    │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crictl pods                                                                             │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crictl ps --all                                                                         │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo ip a s                                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo ip r s                                                                                  │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo iptables-save                                                                           │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo iptables -t nat -L -n -v                                                                │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status kubelet --all --full --no-pager                                        │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat kubelet --no-pager                                                        │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo journalctl -xeu kubelet --all --full --no-pager                                         │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/kubernetes/kubelet.conf                                                        │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /var/lib/kubelet/config.yaml                                                        │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status docker --all --full --no-pager                                         │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat docker --no-pager                                                         │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/docker/daemon.json                                                             │ kindnet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:11:49
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:11:49.340795    6788 out.go:360] Setting OutFile to fd 1712 ...
	I1216 06:11:49.386344    6788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:11:49.386344    6788 out.go:374] Setting ErrFile to fd 1196...
	I1216 06:11:49.386390    6788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:11:49.401091    6788 out.go:368] Setting JSON to false
	I1216 06:11:49.404855    6788 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6531,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:11:49.405055    6788 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:11:49.408997    6788 out.go:179] * [kindnet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:11:49.412763    6788 notify.go:221] Checking for updates...
	I1216 06:11:49.414957    6788 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:11:49.416858    6788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:11:49.419397    6788 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:11:49.421529    6788 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:11:49.423543    6788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:11:49.426393    6788 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.427388    6788 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.427640    6788 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:11:49.428138    6788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:11:49.549056    6788 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:11:49.552567    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:11:49.779179    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:11:49.756494835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:11:49.782904    6788 out.go:179] * Using the docker driver based on user configuration
	I1216 06:11:49.786690    6788 start.go:309] selected driver: docker
	I1216 06:11:49.786719    6788 start.go:927] validating driver "docker" against <nil>
	I1216 06:11:49.786755    6788 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:11:49.871381    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:11:50.104061    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:11:50.077311907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:11:50.105056    6788 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:11:50.105056    6788 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:11:50.108056    6788 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:11:50.110058    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:11:50.110058    6788 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 06:11:50.110058    6788 start.go:353] cluster config:
	{Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:11:50.112053    6788 out.go:179] * Starting "kindnet-030800" primary control-plane node in "kindnet-030800" cluster
	I1216 06:11:50.115067    6788 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:11:50.118075    6788 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:11:50.120078    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:11:50.120078    6788 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:11:50.120078    6788 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:11:50.120078    6788 cache.go:65] Caching tarball of preloaded images
	I1216 06:11:50.120078    6788 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:11:50.121072    6788 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:11:50.121072    6788 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json ...
	I1216 06:11:50.121072    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json: {Name:mkebea825fd6dc6adf01534f5a4bb9848abba58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
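
The two profile.go/lock.go lines above record the profile save: the cluster config shown earlier is serialized to profiles\kindnet-030800\config.json under a write lock. A minimal sketch of that step, assuming a generic JSON marshal-and-write rather than minikube's actual profile.go/lock.go helpers, with the struct abbreviated to a few of the fields visible in the log:

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

// ClusterConfig carries a handful of the fields from the config dump above;
// the real minikube struct is far larger.
type ClusterConfig struct {
	Name   string
	Driver string
	Memory int
	CPUs   int
}

func main() {
	cfg := ClusterConfig{Name: "kindnet-030800", Driver: "docker", Memory: 3072, CPUs: 2}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// The log shows the write guarded by a lock with a 500ms retry delay and
	// 1m timeout; a plain write stands in for that here.
	if err := os.WriteFile("config.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```
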
	I1216 06:11:50.198067    6788 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:11:50.198067    6788 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:11:50.198067    6788 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:11:50.198067    6788 start.go:360] acquireMachinesLock for kindnet-030800: {Name:mk13b4d023e9ef7970ce337d36b9fc70162bc2d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:11:50.198067    6788 start.go:364] duration metric: took 0s to acquireMachinesLock for "kindnet-030800"
	I1216 06:11:50.198067    6788 start.go:93] Provisioning new machine with config: &{Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:11:50.199067    6788 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:11:50.202064    6788 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:11:50.202064    6788 start.go:159] libmachine.API.Create for "kindnet-030800" (driver="docker")
	I1216 06:11:50.202064    6788 client.go:173] LocalClient.Create starting
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Decoding PEM data...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Parsing certificate...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Decoding PEM data...
	I1216 06:11:50.203056    6788 main.go:143] libmachine: Parsing certificate...
	I1216 06:11:50.208057    6788 cli_runner.go:164] Run: docker network inspect kindnet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:11:50.256055    6788 cli_runner.go:211] docker network inspect kindnet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:11:50.259055    6788 network_create.go:284] running [docker network inspect kindnet-030800] to gather additional debugging logs...
	I1216 06:11:50.259055    6788 cli_runner.go:164] Run: docker network inspect kindnet-030800
	W1216 06:11:50.314050    6788 cli_runner.go:211] docker network inspect kindnet-030800 returned with exit code 1
	I1216 06:11:50.314050    6788 network_create.go:287] error running [docker network inspect kindnet-030800]: docker network inspect kindnet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-030800 not found
	I1216 06:11:50.314050    6788 network_create.go:289] output of [docker network inspect kindnet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-030800 not found
	
	** /stderr **
	I1216 06:11:50.318205    6788 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:11:50.407244    6788 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.423243    6788 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.439260    6788 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.454418    6788 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.470404    6788 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.485782    6788 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:11:50.499864    6788 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001585680}
	I1216 06:11:50.499864    6788 network_create.go:124] attempt to create docker network kindnet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:11:50.504590    6788 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-030800 kindnet-030800
	I1216 06:11:50.647049    6788 network_create.go:108] docker network kindnet-030800 192.168.103.0/24 created
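
The six "skipping subnet" lines above show the selection logic: starting at 192.168.49.0/24, the third octet is stepped by 9 and each candidate /24 already held by an existing Docker network is skipped, so the first free one, 192.168.103.0/24, is taken. A minimal sketch of that scan (illustrative only, not minikube's actual network.go; the reserved set is assumed to come from inspecting the host's networks):

```go
package main

import "fmt"

// firstFreeSubnet steps the third octet by 9, matching the 49, 58, 67, 76,
// 85, 94, 103 progression in the log, and returns the first unreserved /24.
func firstFreeSubnet(reserved map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !reserved[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(reserved)) // 192.168.103.0/24
}
```
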
	I1216 06:11:50.647049    6788 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-030800" container
	I1216 06:11:50.655126    6788 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:11:50.718220    6788 cli_runner.go:164] Run: docker volume create kindnet-030800 --label name.minikube.sigs.k8s.io=kindnet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:11:50.775893    6788 oci.go:103] Successfully created a docker volume kindnet-030800
	I1216 06:11:50.779320    6788 cli_runner.go:164] Run: docker run --rm --name kindnet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --entrypoint /usr/bin/test -v kindnet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:11:52.174069    6788 cli_runner.go:217] Completed: docker run --rm --name kindnet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --entrypoint /usr/bin/test -v kindnet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3947303s)
	I1216 06:11:52.174069    6788 oci.go:107] Successfully prepared a docker volume kindnet-030800
	I1216 06:11:52.174069    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:11:52.174069    6788 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:11:52.177694    6788 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
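
The docker run above primes the named volume: tar is used as the container entrypoint to lz4-extract the preloaded image tarball into /extractDir, so the node container later starts with its image store already populated. A minimal sketch of issuing that same command from Go, with shortened illustrative values (the full tarball path, volume name, and image digest appear verbatim in the log line):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative stand-ins; the real values are in the log line above.
	tarball := "preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4"
	volume := "kindnet-030800"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"

	// Mount the tarball read-only, mount the volume, and run tar as the
	// entrypoint to extract straight into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
```
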
	I1216 06:12:02.114874   11368 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:12:02.115036   11368 kubeadm.go:319] 
	I1216 06:12:02.115323   11368 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:12:02.119332   11368 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:12:02.119332   11368 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:12:02.120135   11368 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:12:02.120135   11368 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:12:02.120135   11368 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:12:02.120871   11368 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:12:02.121013   11368 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:12:02.121192   11368 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:12:02.121414   11368 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:12:02.122017   11368 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:12:02.122194   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:12:02.122408   11368 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:12:02.122510   11368 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:12:02.122753   11368 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:12:02.122840   11368 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:12:02.123033   11368 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:12:02.123163   11368 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:12:02.123310   11368 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:12:02.123421   11368 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:12:02.123572   11368 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:12:02.123980   11368 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:12:02.124094   11368 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] OS: Linux
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:12:02.124187   11368 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:12:02.124933   11368 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:12:02.125112   11368 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:12:02.125304   11368 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:12:02.125449   11368 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:12:02.125567   11368 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:12:02.125730   11368 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:12:02.125853   11368 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:12:02.126387   11368 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:12:02.126558   11368 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:12:02.407594   11368 out.go:252]   - Generating certificates and keys ...
	I1216 06:12:02.407968   11368 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:12:02.408113   11368 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:12:02.408288   11368 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:12:02.408453   11368 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:12:02.408673   11368 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:12:02.408815   11368 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:12:02.408921   11368 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:12:02.409054   11368 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:12:02.409210   11368 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:12:02.409444   11368 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:12:02.409514   11368 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:12:02.409673   11368 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:12:02.409749   11368 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:12:02.409903   11368 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:12:02.410062   11368 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:12:02.410138   11368 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:12:02.410298   11368 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:12:02.410526   11368 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:12:02.410600   11368 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:12:02.453808   11368 out.go:252]   - Booting up control plane ...
	I1216 06:12:02.454792   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:12:02.455026   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:12:02.455098   11368 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:12:02.455292   11368 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:12:02.455588   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:12:02.455804   11368 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:12:02.455984   11368 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:12:02.456047   11368 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:12:02.456475   11368 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:12:02.456689   11368 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:12:02.456759   11368 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000829212s
	I1216 06:12:02.456833   11368 kubeadm.go:319] 
	I1216 06:12:02.456918   11368 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:12:02.457018   11368 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:12:02.457186   11368 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:12:02.457264   11368 kubeadm.go:319] 
	I1216 06:12:02.457466   11368 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:12:02.457538   11368 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:12:02.457617   11368 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:12:02.457681   11368 kubeadm.go:319] 
	W1216 06:12:02.457840   11368 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829212s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
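
The failure above boils down to a health probe: kubeadm's wait-control-plane phase polls the kubelet's healthz endpoint on 127.0.0.1:10248 and gives up after 4 minutes, which is exactly the timeout reported in this log. A minimal sketch of such a probe, assuming plain net/http polling rather than kubeadm's actual wait logic:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls url until it returns HTTP 200 or ctx expires,
// mirroring the "curl -sSL http://127.0.0.1:10248/healthz" check in the log.
func waitKubeletHealthy(ctx context.Context, url string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				continue // kubelet not listening yet
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	// The log's "This can take up to 4m0s" corresponds to this deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitKubeletHealthy(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err)
	}
}
```
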
	
	I1216 06:12:02.460957   11368 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:12:02.923334   11368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:12:02.942284   11368 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:12:02.947934   11368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:12:02.960033   11368 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:12:02.960033   11368 kubeadm.go:158] found existing configuration files:
	
	I1216 06:12:02.963699   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:12:02.976249   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:12:02.980398   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:12:02.996745   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:12:03.010587   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:12:03.014857   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:12:03.033804   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.047258   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:12:03.052529   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:12:03.071112   11368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:12:03.084411   11368 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:12:03.089634   11368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
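
The grep/rm pairs above implement stale-config cleanup before the retry: each kubeconfig is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so the next kubeadm init regenerates it. A minimal local sketch of that loop (minikube runs the equivalent on the node over SSH via ssh_runner):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: delete so kubeadm recreates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(path)
			fmt.Println("removed stale config:", path)
		}
	}
}
```
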
	I1216 06:12:03.107865   11368 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:12:03.217980   11368 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:12:03.304403   11368 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:12:03.402507   11368 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:12:07.002051    6788 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.8240354s)
	I1216 06:12:07.002137    6788 kic.go:203] duration metric: took 14.8278391s to extract preloaded images to volume ...
	I1216 06:12:07.005779    6788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:12:07.230944    6788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:12:07.212321642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:12:07.234947    6788 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:12:07.472678    6788 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-030800 --name kindnet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-030800 --network kindnet-030800 --ip 192.168.103.2 --volume kindnet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:12:08.105890    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Running}}
	I1216 06:12:08.171938    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:08.232928    6788 cli_runner.go:164] Run: docker exec kindnet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:12:08.343285    6788 oci.go:144] the created container "kindnet-030800" has a running status.
	I1216 06:12:08.343285    6788 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa...
	I1216 06:12:08.510838    6788 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:12:08.587450    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:08.650452    6788 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:12:08.650452    6788 kic_runner.go:114] Args: [docker exec --privileged kindnet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:12:08.809196    6788 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa...
	I1216 06:12:10.890772    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:10.954521    6788 machine.go:94] provisionDockerMachine start ...
	I1216 06:12:10.957521    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.008520    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.023115    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.023115    6788 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:12:11.199297    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-030800
	
	I1216 06:12:11.199297    6788 ubuntu.go:182] provisioning hostname "kindnet-030800"
	I1216 06:12:11.202294    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.259757    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.259806    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.259806    6788 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-030800 && echo "kindnet-030800" | sudo tee /etc/hostname
	I1216 06:12:11.458451    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-030800
	
	I1216 06:12:11.461723    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.518816    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:11.519151    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:11.519151    6788 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:12:11.682075    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:12:11.682075    6788 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:12:11.682075    6788 ubuntu.go:190] setting up certificates
	I1216 06:12:11.682075    6788 provision.go:84] configureAuth start
	I1216 06:12:11.685801    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:11.740638    6788 provision.go:143] copyHostCerts
	I1216 06:12:11.741639    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:12:11.741639    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:12:11.741639    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:12:11.742643    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:12:11.742643    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:12:11.742643    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:12:11.743641    6788 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:12:11.743641    6788 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:12:11.743641    6788 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:12:11.744645    6788 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-030800 san=[127.0.0.1 192.168.103.2 kindnet-030800 localhost minikube]
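
minikube generates this server certificate in Go, but an equivalent CA-signed certificate with the same SAN list can be sketched with openssl (file names hypothetical, validity period arbitrary):

    # Create a CSR for the server key, then sign it with the minikube CA.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.kindnet-030800"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:kindnet-030800,DNS:localhost,DNS:minikube')
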
	I1216 06:12:11.931347    6788 provision.go:177] copyRemoteCerts
	I1216 06:12:11.935348    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:12:11.939351    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:11.996758    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:12.128806    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:12:12.157528    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:12:12.184855    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:12:12.209875    6788 provision.go:87] duration metric: took 527.7927ms to configureAuth
	I1216 06:12:12.209875    6788 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:12:12.209875    6788 config.go:182] Loaded profile config "kindnet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:12:12.214435    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.270503    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.270548    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.270548    6788 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:12:12.443739    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:12:12.443821    6788 ubuntu.go:71] root file system type: overlay
	I1216 06:12:12.443969    6788 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:12:12.447696    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.505748    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.505780    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.505780    6788 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:12:12.696827    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:12:12.700867    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:12.760030    6788 main.go:143] libmachine: Using SSH client type: native
	I1216 06:12:12.760715    6788 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I1216 06:12:12.760715    6788 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:12:14.220671    6788 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:12:12.685444205 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
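The `diff -u ... || { mv ...; systemctl ... }` command above is an update-if-changed idiom: diff exits 0 when the rendered unit matches what is on disk, so the replace-reload-restart branch runs only when the unit actually changed (as it did here, hence the diff output). Generalized, with hypothetical file and service names:

    # Replace and restart only when the newly rendered config differs.
    sudo diff -u /etc/foo.conf /etc/foo.conf.new || {
      sudo mv /etc/foo.conf.new /etc/foo.conf
      sudo systemctl daemon-reload && sudo systemctl restart foo
    }
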
	I1216 06:12:14.220671    6788 machine.go:97] duration metric: took 3.2661054s to provisionDockerMachine
	I1216 06:12:14.220671    6788 client.go:176] duration metric: took 24.0182853s to LocalClient.Create
	I1216 06:12:14.220671    6788 start.go:167] duration metric: took 24.0182853s to libmachine.API.Create "kindnet-030800"
	I1216 06:12:14.220671    6788 start.go:293] postStartSetup for "kindnet-030800" (driver="docker")
	I1216 06:12:14.220671    6788 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:12:14.225965    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:12:14.228654    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.286730    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.422175    6788 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:12:14.430679    6788 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:12:14.430679    6788 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:12:14.430679    6788 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:12:14.430679    6788 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:12:14.431304    6788 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:12:14.436062    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:12:14.447557    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:12:14.476598    6788 start.go:296] duration metric: took 255.9237ms for postStartSetup
	I1216 06:12:14.481857    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:14.534874    6788 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\config.json ...
	I1216 06:12:14.540932    6788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:12:14.544163    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.599153    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.738099    6788 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:12:14.756903    6788 start.go:128] duration metric: took 24.5575075s to createHost
	I1216 06:12:14.756964    6788 start.go:83] releasing machines lock for "kindnet-030800", held for 24.5585685s
	I1216 06:12:14.761089    6788 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-030800
	I1216 06:12:14.820995    6788 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:12:14.825383    6788 ssh_runner.go:195] Run: cat /version.json
	I1216 06:12:14.825455    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.828473    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:14.882924    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:14.883920    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:15.008272    6788 ssh_runner.go:195] Run: systemctl --version
	W1216 06:12:15.008961    6788 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:12:15.024976    6788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:12:15.035099    6788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:12:15.039160    6788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:12:15.088926    6788 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
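
Conflicting bridge/podman CNI configs are disabled by renaming rather than deleting, so they can be restored later. The find invocation above, reformatted for readability (with the -printf bookkeeping dropped):

    # Rename matching CNI configs to *.mk_disabled instead of removing them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
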
	I1216 06:12:15.089002    6788 start.go:496] detecting cgroup driver to use...
	I1216 06:12:15.089002    6788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:12:15.089195    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1216 06:12:15.115148    6788 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:12:15.115148    6788 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:12:15.116205    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:12:15.133999    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:12:15.148544    6788 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:12:15.153402    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:12:15.173763    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:12:15.193174    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:12:15.211967    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:12:15.230814    6788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:12:15.248897    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:12:15.268590    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:12:15.286801    6788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:12:15.305083    6788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:12:15.323613    6788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:12:15.340787    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:15.499010    6788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 06:12:15.663518    6788 start.go:496] detecting cgroup driver to use...
	I1216 06:12:15.663548    6788 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:12:15.670359    6788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:12:15.699486    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:12:15.720065    6788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:12:15.794660    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:12:15.815487    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:12:15.833957    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:12:15.857975    6788 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:12:15.872465    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:12:15.883658    6788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:12:15.905854    6788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:12:16.059572    6788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:12:16.183220    6788 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:12:16.183220    6788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:12:16.206253    6788 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:12:16.226683    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:16.363066    6788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:12:17.209602    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:12:17.234418    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:12:17.256030    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:12:17.281172    6788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:12:17.429442    6788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:12:17.579817    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:17.730956    6788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:12:17.755884    6788 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:12:17.777180    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:17.927172    6788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:12:18.030003    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:12:18.048766    6788 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:12:18.055532    6788 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:12:18.064014    6788 start.go:564] Will wait 60s for crictl version
	I1216 06:12:18.069369    6788 ssh_runner.go:195] Run: which crictl
	I1216 06:12:18.080342    6788 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:12:18.125849    6788 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
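
With /etc/crictl.yaml pointing at the cri-dockerd socket, the same CRI endpoint can be probed by hand, which is useful when this version check fails:

    sudo crictl version      # reads the endpoint from /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
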
	I1216 06:12:18.129056    6788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:12:18.171478    6788 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:12:18.208246    6788 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:12:18.212058    6788 cli_runner.go:164] Run: docker exec -t kindnet-030800 dig +short host.docker.internal
	I1216 06:12:18.346525    6788 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:12:18.351179    6788 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:12:18.360150    6788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
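
That one-liner is minikube's idempotent hosts-entry pattern: filter out any stale line for the name, append the fresh mapping, and cp (not mv) the result back, because /etc/hosts is bind-mounted into the container and its inode cannot be swapped out. As a reusable sketch (helper name hypothetical):

    # Upsert "IP<TAB>name" into /etc/hosts without duplicating entries.
    update_hosts() {  # $1 = IP, $2 = hostname
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts 192.168.65.254 host.minikube.internal
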
	I1216 06:12:18.377467    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:18.431980    6788 kubeadm.go:884] updating cluster {Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:12:18.432155    6788 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:12:18.435467    6788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:12:18.470599    6788 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:12:18.470599    6788 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:12:18.474251    6788 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:12:18.502607    6788 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
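
The same preload check can be reproduced from the host by listing repo:tag pairs inside the node container:

    docker exec kindnet-030800 docker images --format '{{.Repository}}:{{.Tag}}'
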
	I1216 06:12:18.502607    6788 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:12:18.502607    6788 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:12:18.502607    6788 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
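
Once the unit and its drop-in are copied in (see the scp lines below), the effective kubelet configuration can be inspected inside the node:

    # Shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in.
    docker exec kindnet-030800 systemctl cat kubelet
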
	I1216 06:12:18.506388    6788 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:12:18.578689    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:12:18.578689    6788 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:12:18.578689    6788 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-030800 NodeName:kindnet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:12:18.579341    6788 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
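A generated config like the one above can be sanity-checked before it is handed to init; recent kubeadm releases ship a validate subcommand:

    # Static validation of the rendered kubeadm config (no cluster changes).
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
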
	I1216 06:12:18.585628    6788 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:12:18.597522    6788 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:12:18.601494    6788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:12:18.615009    6788 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1216 06:12:18.637536    6788 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:12:18.658037    6788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 06:12:18.688118    6788 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:12:18.695892    6788 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:12:18.714307    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:18.850314    6788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:12:18.871857    6788 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800 for IP: 192.168.103.2
	I1216 06:12:18.871857    6788 certs.go:195] generating shared ca certs ...
	I1216 06:12:18.871857    6788 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.872460    6788 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:12:18.872580    6788 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:12:18.872580    6788 certs.go:257] generating profile certs ...
	I1216 06:12:18.873200    6788 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key
	I1216 06:12:18.873250    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt with IP's: []
	I1216 06:12:18.949253    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt ...
	I1216 06:12:18.949253    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.crt: {Name:mkf410fba892917bdd522929abe867e46494e3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.950237    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key ...
	I1216 06:12:18.950237    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\client.key: {Name:mkf29080c46ee2c14c10a21eb67c9cc815f21e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:18.951309    6788 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf
	I1216 06:12:18.951403    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:12:19.114614    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf ...
	I1216 06:12:19.114614    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf: {Name:mkb55c42e33a2ae7870887e58b6e05f71dd4daf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.115619    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf ...
	I1216 06:12:19.115619    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf: {Name:mk19d54f554eb9aa8025289f18eb07425aa3fc90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.116906    6788 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt.eec2d4cf -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt
	I1216 06:12:19.131178    6788 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key.eec2d4cf -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key
	I1216 06:12:19.132179    6788 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key
	I1216 06:12:19.132179    6788 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt with IP's: []
	I1216 06:12:19.184770    6788 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt ...
	I1216 06:12:19.184770    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt: {Name:mkde61e113e82c5dc4f7e40e38dd7355210b095d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.185771    6788 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key ...
	I1216 06:12:19.185771    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key: {Name:mk44855c783f1633070400559fd3d672d6875e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:19.200773    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:12:19.200773    6788 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:12:19.201509    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:12:19.201643    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:12:19.201822    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:12:19.201993    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:12:19.202166    6788 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:12:19.202444    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:12:19.237351    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:12:19.262028    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:12:19.287983    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:12:19.314234    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:12:19.339105    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:12:19.364652    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:12:19.396531    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:12:19.427432    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:12:19.459712    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:12:19.482706    6788 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:12:19.510753    6788 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:12:19.533021    6788 ssh_runner.go:195] Run: openssl version
	I1216 06:12:19.551437    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.569271    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:12:19.590136    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.598267    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.602512    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:12:19.651072    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:12:19.666426    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:12:19.681980    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.696016    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:12:19.714282    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.721158    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.725233    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:12:19.774540    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:12:19.793803    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:12:19.810823    6788 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.827895    6788 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:12:19.844802    6788 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.853541    6788 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.857849    6788 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:12:19.905009    6788 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:12:19.921560    6788 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
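
Each `openssl x509 -hash` / `ln -fs` pair above builds the hashed symlink that OpenSSL's trust-directory lookup expects, i.e. what c_rehash automates. Simplified to one hop:

    # Link the CA into /etc/ssl/certs under its subject-hash name.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
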
	I1216 06:12:19.939199    6788 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:12:19.947504    6788 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:12:19.947719    6788 kubeadm.go:401] StartCluster: {Name:kindnet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:12:19.950360    6788 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:12:19.983797    6788 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:12:20.000670    6788 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:12:20.014790    6788 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:12:20.018800    6788 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:12:20.032572    6788 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:12:20.032616    6788 kubeadm.go:158] found existing configuration files:
	
	I1216 06:12:20.036680    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:12:20.049905    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:12:20.054058    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:12:20.071603    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:12:20.085088    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:12:20.089085    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:12:20.106513    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:12:20.118805    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:12:20.122049    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:12:20.142303    6788 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:12:20.154293    6788 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:12:20.158297    6788 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:12:20.174303    6788 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:12:20.296404    6788 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:12:20.301548    6788 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:12:20.397661    6788 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
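
The bootstrap itself is the long ssh_runner Start above; stripped of the docker-driver preflight exemptions, it reduces to this trimmed sketch:

    # Run the pinned kubeadm against the rendered config, skipping host
    # checks that a containerized "node" cannot satisfy.
    sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification
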
	I1216 06:12:33.968529    6788 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:12:33.968529    6788 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:12:33.968529    6788 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:12:33.969389    6788 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:12:33.969607    6788 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:12:33.969607    6788 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:12:33.972873    6788 out.go:252]   - Generating certificates and keys ...
	I1216 06:12:33.972873    6788 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:12:33.972873    6788 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:12:33.973434    6788 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:12:33.973616    6788 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:12:33.974344    6788 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:12:33.975209    6788 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:12:33.975268    6788 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:12:33.975828    6788 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:12:33.975933    6788 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:12:33.975933    6788 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:12:33.975933    6788 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:12:33.980556    6788 out.go:252]   - Booting up control plane ...
	I1216 06:12:33.980556    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:12:33.981078    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:12:33.981180    6788 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:12:33.981825    6788 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:12:33.981911    6788 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:12:33.981911    6788 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:12:33.981911    6788 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:12:33.982502    6788 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:12:33.982549    6788 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 503.041935ms
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:12:33.982549    6788 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.898426957s
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.853187439s
	I1216 06:12:33.983232    6788 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502086413s
	I1216 06:12:33.983821    6788 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:12:33.983995    6788 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:12:33.983995    6788 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:12:33.984608    6788 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:12:33.984704    6788 kubeadm.go:319] [bootstrap-token] Using token: xj3a70.p80jdqi9w7ogff39
	I1216 06:12:33.994781    6788 out.go:252]   - Configuring RBAC rules ...
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:12:33.994781    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:12:33.995784    6788 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:12:33.995784    6788 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:12:33.995784    6788 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:12:33.995784    6788 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:12:33.995784    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:12:33.996786    6788 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:12:33.996786    6788 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:12:33.996786    6788 kubeadm.go:319] 
	I1216 06:12:33.996786    6788 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:12:33.996786    6788 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:12:33.997793    6788 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:12:33.997793    6788 kubeadm.go:319] 
	I1216 06:12:33.997912    6788 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:12:33.997912    6788 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:12:33.997912    6788 kubeadm.go:319] 
	I1216 06:12:33.997912    6788 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xj3a70.p80jdqi9w7ogff39 \
	I1216 06:12:33.998463    6788 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:12:33.998463    6788 kubeadm.go:319] 	--control-plane 
	I1216 06:12:33.998463    6788 kubeadm.go:319] 
	I1216 06:12:33.998463    6788 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:12:33.998463    6788 kubeadm.go:319] 
	I1216 06:12:33.998463    6788 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xj3a70.p80jdqi9w7ogff39 \
	I1216 06:12:33.999035    6788 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
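If the join command above is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA. A hedged sketch using the standard openssl pipeline from the kubeadm documentation; the certificate path follows the certificateDir /var/lib/minikube/certs shown earlier in this init run, and an RSA CA key is assumed:

    # prints the hex digest expected after "sha256:" in --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

If the 24-hour bootstrap token has expired as well, "kubeadm token create --print-join-command" regenerates the whole join line.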
	I1216 06:12:33.999035    6788 cni.go:84] Creating CNI manager for "kindnet"
	I1216 06:12:34.001665    6788 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1216 06:12:34.007658    6788 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 06:12:34.019612    6788 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 06:12:34.019612    6788 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 06:12:34.041663    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
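The CNI manifest applied above should end in a kindnet DaemonSet rolling out in kube-system; a hypothetical verification (DaemonSet name inferred from the kindnet-w5v4t pod seen later in this log, kubectl context assumed to point at this cluster):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m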
	I1216 06:12:34.320470    6788 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:12:34.325898    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:34.325972    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-030800 minikube.k8s.io/updated_at=2025_12_16T06_12_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kindnet-030800 minikube.k8s.io/primary=true
	I1216 06:12:34.337113    6788 ops.go:34] apiserver oom_adj: -16
	I1216 06:12:34.446144    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:34.947933    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:35.448308    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:35.947898    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:36.447700    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:36.946927    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:37.445777    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:37.947107    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:38.447683    6788 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:12:38.538781    6788 kubeadm.go:1114] duration metric: took 4.2182542s to wait for elevateKubeSystemPrivileges
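The burst of identical "kubectl get sa default" calls above is a roughly 500 ms poll: the minikube-rbac ClusterRoleBinding created just before only becomes useful once the serviceaccount controller has materialized the default ServiceAccount. A sketch of the same wait, using the in-node paths from the log:

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5   # matches the ~500ms spacing of the retries above
    done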
	I1216 06:12:38.538869    6788 kubeadm.go:403] duration metric: took 18.5909004s to StartCluster
	I1216 06:12:38.538924    6788 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:38.538924    6788 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:12:38.540348    6788 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:12:38.541592    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:12:38.541592    6788 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:12:38.541543    6788 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:12:38.541748    6788 addons.go:70] Setting storage-provisioner=true in profile "kindnet-030800"
	I1216 06:12:38.541780    6788 addons.go:239] Setting addon storage-provisioner=true in "kindnet-030800"
	I1216 06:12:38.541927    6788 host.go:66] Checking if "kindnet-030800" exists ...
	I1216 06:12:38.541927    6788 addons.go:70] Setting default-storageclass=true in profile "kindnet-030800"
	I1216 06:12:38.541927    6788 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-030800"
	I1216 06:12:38.541927    6788 config.go:182] Loaded profile config "kindnet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:12:38.544488    6788 out.go:179] * Verifying Kubernetes components...
	I1216 06:12:38.550892    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.550892    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.553045    6788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:12:38.611835    6788 addons.go:239] Setting addon default-storageclass=true in "kindnet-030800"
	I1216 06:12:38.611835    6788 host.go:66] Checking if "kindnet-030800" exists ...
	I1216 06:12:38.612829    6788 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:12:38.615828    6788 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:12:38.615828    6788 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:12:38.618830    6788 cli_runner.go:164] Run: docker container inspect kindnet-030800 --format={{.State.Status}}
	I1216 06:12:38.619830    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:38.670833    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:38.671832    6788 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:12:38.671832    6788 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:12:38.674835    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:38.728830    6788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kindnet-030800\id_rsa Username:docker}
	I1216 06:12:38.786244    6788 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:12:38.993052    6788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:12:39.294182    6788 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:12:39.393642    6788 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:12:39.901700    6788 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.1154417s)
	I1216 06:12:39.901700    6788 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
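The sed pipeline completed above rewrites the coredns ConfigMap in place. Reconstructed from that sed expression, the resulting Corefile fragment looks like this (a log directive inserted before errors, and a hosts block inserted ahead of the default forwarder; other directives elided):

    log
    errors
    ...
    hosts {
       192.168.65.254 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf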
	I1216 06:12:40.331433    6788 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.0372375s)
	I1216 06:12:40.331433    6788 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3383629s)
	I1216 06:12:40.335284    6788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-030800
	I1216 06:12:40.389545    6788 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:12:40.393549    6788 addons.go:530] duration metric: took 1.8519322s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:12:40.400560    6788 node_ready.go:35] waiting up to 15m0s for node "kindnet-030800" to be "Ready" ...
	I1216 06:12:40.413561    6788 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-030800" context rescaled to 1 replicas
	W1216 06:12:42.406617    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:44.907499    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:47.406803    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:49.908547    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:52.408158    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:54.907731    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:56.908056    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:12:59.407002    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	W1216 06:13:01.407755    6788 node_ready.go:57] node "kindnet-030800" has "Ready":"False" status (will retry)
	I1216 06:13:03.406403    6788 node_ready.go:49] node "kindnet-030800" is "Ready"
	I1216 06:13:03.406463    6788 node_ready.go:38] duration metric: took 23.0055942s for node "kindnet-030800" to be "Ready" ...
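The 23-second Ready wait above can be reproduced by hand. A hypothetical equivalent, assuming the kubectl context is named after the profile and using the 15m budget declared at the start of the wait:

    kubectl --context kindnet-030800 wait --for=condition=Ready \
        node/kindnet-030800 --timeout=15m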
	I1216 06:13:03.406495    6788 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:13:03.411466    6788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:13:03.430701    6788 api_server.go:72] duration metric: took 24.8886193s to wait for apiserver process to appear ...
	I1216 06:13:03.430701    6788 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:13:03.430701    6788 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54866/healthz ...
	I1216 06:13:03.440994    6788 api_server.go:279] https://127.0.0.1:54866/healthz returned 200:
	ok
	I1216 06:13:03.443640    6788 api_server.go:141] control plane version: v1.34.2
	I1216 06:13:03.443640    6788 api_server.go:131] duration metric: took 12.9387ms to wait for apiserver health ...
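The healthz probe above goes through the host port Docker published for the apiserver (54866 -> 8443, per the container inspect a few lines earlier). The same check can be run manually from the Windows host; -k is needed because the apiserver certificate is not trusted there:

    curl -k https://127.0.0.1:54866/healthz
    # a healthy apiserver answers with the body "ok", as in the log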
	I1216 06:13:03.443640    6788 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:13:03.449411    6788 system_pods.go:59] 8 kube-system pods found
	I1216 06:13:03.449411    6788 system_pods.go:61] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.449411    6788 system_pods.go:61] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.449411    6788 system_pods.go:61] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.449411    6788 system_pods.go:74] duration metric: took 5.7708ms to wait for pod list to return data ...
	I1216 06:13:03.449411    6788 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:13:03.454158    6788 default_sa.go:45] found service account: "default"
	I1216 06:13:03.454158    6788 default_sa.go:55] duration metric: took 4.7472ms for default service account to be created ...
	I1216 06:13:03.454158    6788 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:13:03.462563    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.462563    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.462563    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.462563    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.462563    6788 retry.go:31] will retry after 200.474088ms: missing components: kube-dns
	I1216 06:13:03.671143    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.671143    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.671143    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.671143    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.671143    6788 retry.go:31] will retry after 243.807956ms: missing components: kube-dns
	I1216 06:13:03.922250    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:03.922250    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:03.922250    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:03.922250    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:03.922250    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:03.922340    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:03.922374    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:03.922374    6788 retry.go:31] will retry after 406.562398ms: missing components: kube-dns
	I1216 06:13:04.338229    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:04.338229    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:04.338229    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:04.338767    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:04.338820    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:04.338820    6788 retry.go:31] will retry after 404.864087ms: missing components: kube-dns
	I1216 06:13:04.751475    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:04.751475    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:13:04.751475    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:04.751475    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:13:04.751475    6788 retry.go:31] will retry after 580.937637ms: missing components: kube-dns
	I1216 06:13:05.340705    6788 system_pods.go:86] 8 kube-system pods found
	I1216 06:13:05.340705    6788 system_pods.go:89] "coredns-66bc5c9577-2klg5" [eab1745b-f5df-4bb5-906f-2233d85b34a7] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "etcd-kindnet-030800" [82192121-3238-41ce-898d-326f5efa932c] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-apiserver-kindnet-030800" [e7167473-f299-410f-96e7-ecb78531f96b] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-controller-manager-kindnet-030800" [5db23943-1f3f-4c87-a5f5-02580714da0d] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-proxy-w78wd" [032d3dba-9f70-408b-929d-f456c70d781d] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "kube-scheduler-kindnet-030800" [ac6c5598-5442-4924-b2c1-444474d904b3] Running
	I1216 06:13:05.340705    6788 system_pods.go:89] "storage-provisioner" [72842983-811a-4300-9476-26250c6769af] Running
	I1216 06:13:05.340705    6788 system_pods.go:126] duration metric: took 1.8865217s to wait for k8s-apps to be running ...
	I1216 06:13:05.340705    6788 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:13:05.345162    6788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:13:05.363995    6788 system_svc.go:56] duration metric: took 23.2385ms WaitForService to wait for kubelet
	I1216 06:13:05.364042    6788 kubeadm.go:587] duration metric: took 26.8218872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:13:05.364042    6788 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:13:05.368328    6788 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:13:05.368328    6788 node_conditions.go:123] node cpu capacity is 16
	I1216 06:13:05.368328    6788 node_conditions.go:105] duration metric: took 4.2856ms to run NodePressure ...
	I1216 06:13:05.368328    6788 start.go:242] waiting for startup goroutines ...
	I1216 06:13:05.368328    6788 start.go:247] waiting for cluster config update ...
	I1216 06:13:05.368328    6788 start.go:256] writing updated cluster config ...
	I1216 06:13:05.373800    6788 ssh_runner.go:195] Run: rm -f paused
	I1216 06:13:05.381487    6788 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:13:05.388287    6788 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2klg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.395940    6788 pod_ready.go:94] pod "coredns-66bc5c9577-2klg5" is "Ready"
	I1216 06:13:05.395940    6788 pod_ready.go:86] duration metric: took 7.6527ms for pod "coredns-66bc5c9577-2klg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.402352    6788 pod_ready.go:83] waiting for pod "etcd-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.409558    6788 pod_ready.go:94] pod "etcd-kindnet-030800" is "Ready"
	I1216 06:13:05.409558    6788 pod_ready.go:86] duration metric: took 7.2054ms for pod "etcd-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.413805    6788 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.423218    6788 pod_ready.go:94] pod "kube-apiserver-kindnet-030800" is "Ready"
	I1216 06:13:05.423218    6788 pod_ready.go:86] duration metric: took 9.4134ms for pod "kube-apiserver-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.426944    6788 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.790782    6788 pod_ready.go:94] pod "kube-controller-manager-kindnet-030800" is "Ready"
	I1216 06:13:05.790782    6788 pod_ready.go:86] duration metric: took 363.8334ms for pod "kube-controller-manager-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:05.989561    6788 pod_ready.go:83] waiting for pod "kube-proxy-w78wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.398538    6788 pod_ready.go:94] pod "kube-proxy-w78wd" is "Ready"
	I1216 06:13:06.398538    6788 pod_ready.go:86] duration metric: took 408.972ms for pod "kube-proxy-w78wd" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.590868    6788 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.989680    6788 pod_ready.go:94] pod "kube-scheduler-kindnet-030800" is "Ready"
	I1216 06:13:06.989680    6788 pod_ready.go:86] duration metric: took 398.2881ms for pod "kube-scheduler-kindnet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:13:06.989680    6788 pod_ready.go:40] duration metric: took 1.6081714s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:13:07.082864    6788 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:13:07.089654    6788 out.go:179] * Done! kubectl is now configured to use "kindnet-030800" cluster and "default" namespace by default
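This Done! line closes the kindnet-030800 start; the PID 7444 lines that follow belong to an apparently parallel start of the newest-cni-256200 profile, whose kubeadm init fails at the kubelet health check. A quick smoke test of the finished cluster via the bundled kubectl passthrough (hypothetical invocation; binary name as in this report's test commands):

    out/minikube-windows-amd64.exe -p kindnet-030800 kubectl -- get nodes -o wide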
	I1216 06:13:29.437822    7444 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:29.437822    7444 kubeadm.go:319] 
	I1216 06:13:29.438345    7444 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:29.442203    7444 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:29.442288    7444 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:29.442391    7444 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:29.442422    7444 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:29.442532    7444 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:29.442639    7444 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:29.442697    7444 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:29.442815    7444 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:29.443354    7444 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:29.443491    7444 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:29.443846    7444 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:29.444385    7444 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:29.444615    7444 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:29.444789    7444 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:29.445371    7444 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:29.445501    7444 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:29.445583    7444 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:29.445630    7444 kubeadm.go:319] OS: Linux
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:29.445938    7444 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:29.446464    7444 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:29.446573    7444 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:29.447176    7444 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:29.447176    7444 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:29.451165    7444 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:29.451165    7444 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:13:29.451741    7444 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1216 06:13:29.452307    7444 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:29.452892    7444 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:29.453414    7444 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:29.453588    7444 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:29.453727    7444 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:29.453727    7444 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:29.457212    7444 out.go:252]   - Booting up control plane ...
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:29.457212    7444 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:29.457981    7444 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:29.458269    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:29.458458    7444 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:29.458458    7444 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:29.459071    7444 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:29.459187    7444 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.0010934s
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459234    7444 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:29.459234    7444 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:29.459234    7444 kubeadm.go:319] 
	I1216 06:13:29.459809    7444 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:29.459809    7444 kubeadm.go:319] 
	W1216 06:13:29.459809    7444 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-256200] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.0010934s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
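Before the reset and retry below, the log's own troubleshooting advice can be followed from the host. A hedged sketch; the profile name newest-cni-256200 is taken from the certificate lines above, and the binary name matches the rest of this report:

    out/minikube-windows-amd64.exe -p newest-cni-256200 ssh -- sudo systemctl status kubelet --no-pager
    out/minikube-windows-amd64.exe -p newest-cni-256200 ssh -- sudo journalctl -xeu kubelet --no-pager
    # the exact probe kubeadm timed out on:
    out/minikube-windows-amd64.exe -p newest-cni-256200 ssh -- curl -sSL http://127.0.0.1:10248/healthz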
	
	I1216 06:13:29.463847    7444 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1216 06:13:29.953578    7444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:13:29.979536    7444 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:13:29.985016    7444 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:13:29.996493    7444 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:13:29.996493    7444 kubeadm.go:158] found existing configuration files:
	
	I1216 06:13:30.000490    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:13:30.012501    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:13:30.016488    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:13:30.031492    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:13:30.042509    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:13:30.046490    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:13:30.066672    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.081178    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:13:30.085494    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:13:30.103106    7444 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:13:30.115159    7444 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:13:30.119152    7444 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:13:30.134150    7444 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:13:30.260471    7444 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:13:30.351419    7444 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1216 06:13:30.450039    7444 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:13:41.144775    1840 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1216 06:13:41.144775    1840 kubeadm.go:319] 
	I1216 06:13:41.144775    1840 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1216 06:13:41.148846    1840 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1216 06:13:41.149531    1840 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:13:41.149956    1840 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1216 06:13:41.150211    1840 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1216 06:13:41.150211    1840 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1216 06:13:41.150759    1840 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1216 06:13:41.150889    1840 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1216 06:13:41.151079    1840 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1216 06:13:41.151275    1840 kubeadm.go:319] CONFIG_INET: enabled
	I1216 06:13:41.151526    1840 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1216 06:13:41.151790    1840 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1216 06:13:41.152090    1840 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1216 06:13:41.152676    1840 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1216 06:13:41.153311    1840 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1216 06:13:41.153615    1840 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1216 06:13:41.153787    1840 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1216 06:13:41.154024    1840 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] OS: Linux
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPU: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1216 06:13:41.154086    1840 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1216 06:13:41.154727    1840 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1216 06:13:41.154783    1840 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1216 06:13:41.155306    1840 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:13:41.155494    1840 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:13:41.156052    1840 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:13:41.158898    1840 out.go:252]   - Generating certificates and keys ...
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:13:41.158941    1840 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 06:13:41.159722    1840 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1216 06:13:41.159918    1840 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 06:13:41.160046    1840 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1216 06:13:41.160178    1840 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 06:13:41.160705    1840 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 06:13:41.160782    1840 kubeadm.go:319] [certs] Using the existing "sa" key
	I1216 06:13:41.160887    1840 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:13:41.160987    1840 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:13:41.161622    1840 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:13:41.161622    1840 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:13:41.164114    1840 out.go:252]   - Booting up control plane ...
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:13:41.164114    1840 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:13:41.165086    1840 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:13:41.166093    1840 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:13:41.166093    1840 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000506958s
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is not running
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 06:13:41.166093    1840 kubeadm.go:319] 
	I1216 06:13:41.166093    1840 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 06:13:41.166093    1840 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1216 06:13:41.167095    1840 kubeadm.go:319] 
	I1216 06:13:41.167095    1840 kubeadm.go:403] duration metric: took 8m4.2111844s to StartCluster
	I1216 06:13:41.167095    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 06:13:41.170749    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 06:13:41.232071    1840 cri.go:89] found id: ""
	I1216 06:13:41.232103    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.232153    1840 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:13:41.232153    1840 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 06:13:41.237864    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 06:13:41.286666    1840 cri.go:89] found id: ""
	I1216 06:13:41.286666    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.286666    1840 logs.go:284] No container was found matching "etcd"
	I1216 06:13:41.286666    1840 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 06:13:41.291424    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 06:13:41.333354    1840 cri.go:89] found id: ""
	I1216 06:13:41.333354    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.333354    1840 logs.go:284] No container was found matching "coredns"
	I1216 06:13:41.333354    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 06:13:41.337361    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 06:13:41.379362    1840 cri.go:89] found id: ""
	I1216 06:13:41.379362    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.379362    1840 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:13:41.379362    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 06:13:41.383354    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 06:13:41.434935    1840 cri.go:89] found id: ""
	I1216 06:13:41.434935    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.434935    1840 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:13:41.434935    1840 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 06:13:41.438925    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 06:13:41.481929    1840 cri.go:89] found id: ""
	I1216 06:13:41.481929    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.481929    1840 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:13:41.481929    1840 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 06:13:41.485920    1840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 06:13:41.530524    1840 cri.go:89] found id: ""
	I1216 06:13:41.530614    1840 logs.go:282] 0 containers: []
	W1216 06:13:41.530614    1840 logs.go:284] No container was found matching "kindnet"
	I1216 06:13:41.530666    1840 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:13:41.530666    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:13:41.626225    1840 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:13:41.619073   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.620556   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.621534   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.622645   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:41.623449   10952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:13:41.626225    1840 logs.go:123] Gathering logs for Docker ...
	I1216 06:13:41.626225    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:13:41.658338    1840 logs.go:123] Gathering logs for container status ...
	I1216 06:13:41.658338    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:13:41.703328    1840 logs.go:123] Gathering logs for kubelet ...
	I1216 06:13:41.703328    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:13:41.762322    1840 logs.go:123] Gathering logs for dmesg ...
	I1216 06:13:41.762322    1840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 06:13:41.799388    1840 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.799388    1840 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:13:41.799388    1840 out.go:285] * 
	W1216 06:13:41.801787    1840 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:13:41.811220    1840 out.go:203] 
	W1216 06:13:41.815157    1840 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000506958s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 06:13:41.815157    1840 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 06:13:41.815157    1840 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 06:13:41.817851    1840 out.go:203] 
	
	
	==> Docker <==
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402735317Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402828927Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402844429Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402852530Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402861131Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402891834Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402934238Z" level=info msg="Initializing buildkit"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.580612363Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.589812059Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590000679Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590040684Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590028382Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:13:49.721904   11518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:49.722933   11518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:49.724313   11518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:49.725958   11518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:13:49.727398   11518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.809571] CPU: 0 PID: 390218 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f8788dabb20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f8788dabaf6.
	[  +0.000001] RSP: 002b:00007ffd609e6e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.827622] CPU: 14 PID: 390383 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fddca31bb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fddca31baf6.
	[  +0.000001] RSP: 002b:00007ffcdf5a88f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.540385] tmpfs: Unknown parameter 'noswap'
	[  +9.462694] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:13:49 up  1:50,  0 user,  load average: 2.73, 4.04, 3.96
	Linux no-preload-686300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:13:46 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:47 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 328.
	Dec 16 06:13:47 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:47 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:47 no-preload-686300 kubelet[11347]: E1216 06:13:47.187224   11347 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:47 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:47 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:47 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 329.
	Dec 16 06:13:47 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:47 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:47 no-preload-686300 kubelet[11361]: E1216 06:13:47.935411   11361 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:47 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:47 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:48 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 16 06:13:48 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:48 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:48 no-preload-686300 kubelet[11400]: E1216 06:13:48.687859   11400 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:48 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:48 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:13:49 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 16 06:13:49 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:49 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:13:49 no-preload-686300 kubelet[11442]: E1216 06:13:49.440648   11442 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:13:49 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:13:49 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
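The repeated journalctl entries above pinpoint the root cause: on this WSL2 host, which is still on cgroup v1 per the SystemVerification warning, kubelet v1.35.0-beta.0 refuses to start at all ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's 4m0s wait on http://127.0.0.1:10248/healthz can only time out. The log itself names the two ways out: set the kubelet configuration option 'FailCgroupV1' to 'false', or move the host to cgroup v2. A minimal sketch of checking and switching the hierarchy, assuming a standard WSL2 setup on the Jenkins host (the .wslconfig edit is an assumption about this machine, not something taken from the report):

    # Inside the node / WSL distro: "cgroup2fs" means cgroup v2 (unified),
    # "tmpfs" means the host is still on cgroup v1.
    stat -fc %T /sys/fs/cgroup/

    # On the Windows side, WSL2 can be booted with a pure cgroup v2 hierarchy
    # by adding to %UserProfile%\.wslconfig (hypothetical local change):
    #   [wsl2]
    #   kernelCommandLine = cgroup_no_v1=all
    # then restarting WSL so the new kernel command line takes effect:
    wsl.exe --shutdown

Until the host is on cgroup v2 (or the kubelet validation is explicitly relaxed), every FirstStart/SecondStart in this group will likely keep crash-looping the kubelet, as the restart counter of 331 above suggests.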
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 6 (628.6287ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 06:13:50.903054     784 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (5.78s)
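Note the status output above: besides the apiserver being stopped, the profile is missing from the kubeconfig entirely ("no-preload-686300" does not appear in ...\minikube-integration\kubeconfig) and kubectl is pointed at a stale minikube-vm context. When triaging this by hand, a reasonable first step is to re-sync the context before trusting any kubectl output; a hedged sketch using standard minikube/kubectl commands against the profile name from this report:

    out/minikube-windows-amd64.exe update-context -p no-preload-686300
    kubectl config get-contexts
    kubectl config current-context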

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (114.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-686300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-686300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m51.7627688s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_13.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-686300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-686300 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-686300 describe deploy/metrics-server -n kube-system: exit status 1 (95.473ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-686300" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-686300 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
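The addon failure itself is collateral damage: each kubectl apply above was refused at https://localhost:8443 because the control plane from the earlier FirstStart failure never came up, and the follow-up `context "no-preload-686300" does not exist` shows the assertion had no deployment to inspect. Before reading anything into the image/registry overrides, it is worth confirming the apiserver answers at all; a short hedged sketch using standard commands against this profile:

    out/minikube-windows-amd64.exe status -p no-preload-686300
    # Only meaningful once the context exists and the apiserver is up:
    kubectl --context no-preload-686300 get --raw=/readyz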
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-686300
helpers_test.go:244: (dbg) docker inspect no-preload-686300:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc",
	        "Created": "2025-12-16T06:04:57.603416998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:04:57.945459203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hosts",
	        "LogPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc-json.log",
	        "Name": "/no-preload-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-686300",
	                "Source": "/var/lib/docker/volumes/no-preload-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-686300",
	                "name.minikube.sigs.k8s.io": "no-preload-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9eaf22c59ece58cc41ccdd6b1ffbec9338fd4c996e850e9f23f89cd055f1d4e3",
	            "SandboxKey": "/var/run/docker/netns/9eaf22c59ece",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54238"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54239"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54240"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54241"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54242"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c88c2c552749da4ec31002e488ccc5e8184c9ce7ba360033e35a2aa2c5aead9",
	                    "EndpointID": "c09b65cdfb104f0ebd3eca48e5283746dc009186edbfa5d2e23372c6159c69c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-686300",
	                        "f98924d46fbc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
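The host ports recorded under NetworkSettings above (54238-54242, each bound to 127.0.0.1) can be read back with a Go template; a minimal check, assuming the no-preload-686300 container still exists, reuses the same format string minikube itself runs during provisioning:

    docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300

Against the inspect output above this prints '54238', the host side of the container's SSH port mapping.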
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 6 (581.832ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 06:15:43.423062    3860 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
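The stale-context warning above points at the root cause: status.go:458 shows that the "no-preload-686300" endpoint is missing from the kubeconfig, so kubectl is still aimed at an old cluster. A plausible manual repair, assuming the profile is still running, is the command the warning itself recommends:

    out/minikube-windows-amd64.exe update-context -p no-preload-686300

which regenerates the kubeconfig entry for the profile; the harness does not run it here and instead proceeds to collect post-mortem logs.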
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25: (1.1629199s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-030800 sudo systemctl cat kubelet --no-pager                                                                                 │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo journalctl -xeu kubelet --all --full --no-pager                                                                  │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/kubernetes/kubelet.conf                                                                                 │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /var/lib/kubelet/config.yaml                                                                                 │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status docker --all --full --no-pager                                                                  │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat docker --no-pager                                                                                  │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/docker/daemon.json                                                                                      │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo docker system info                                                                                               │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status cri-docker --all --full --no-pager                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat cri-docker --no-pager                                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-686300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-686300 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ ssh     │ -p kindnet-030800 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cri-dockerd --version                                                                                            │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status containerd --all --full --no-pager                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat containerd --no-pager                                                                              │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /lib/systemd/system/containerd.service                                                                       │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo cat /etc/containerd/config.toml                                                                                  │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo containerd config dump                                                                                           │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo systemctl status crio --all --full --no-pager                                                                    │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │                     │
	│ ssh     │ -p kindnet-030800 sudo systemctl cat crio --no-pager                                                                                    │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ ssh     │ -p kindnet-030800 sudo crio config                                                                                                      │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:13 UTC │
	│ delete  │ -p kindnet-030800                                                                                                                       │ kindnet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:13 UTC │ 16 Dec 25 06:14 UTC │
	│ start   │ -p calico-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker                            │ calico-030800     │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:14 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:14:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:14:02.529934   11692 out.go:360] Setting OutFile to fd 1840 ...
	I1216 06:14:02.571712   11692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:14:02.571712   11692 out.go:374] Setting ErrFile to fd 1976...
	I1216 06:14:02.571712   11692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:14:02.586987   11692 out.go:368] Setting JSON to false
	I1216 06:14:02.589871   11692 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6664,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:14:02.589871   11692 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:14:02.594448   11692 out.go:179] * [calico-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:14:02.597256   11692 notify.go:221] Checking for updates...
	I1216 06:14:02.597256   11692 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:14:02.610436   11692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:14:02.612980   11692 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:14:02.614589   11692 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:14:02.616901   11692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:14:02.619880   11692 config.go:182] Loaded profile config "kubernetes-upgrade-633300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:14:02.620151   11692 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:14:02.620151   11692 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:14:02.620151   11692 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:14:02.743021   11692 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:14:02.746555   11692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:14:02.994005   11692 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:14:02.974952346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:14:02.997997   11692 out.go:179] * Using the docker driver based on user configuration
	I1216 06:14:03.000997   11692 start.go:309] selected driver: docker
	I1216 06:14:03.000997   11692 start.go:927] validating driver "docker" against <nil>
	I1216 06:14:03.000997   11692 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:14:03.087069   11692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:14:03.346582   11692 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:14:03.322687204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:14:03.346582   11692 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:14:03.347577   11692 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:14:03.349577   11692 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:14:03.351577   11692 cni.go:84] Creating CNI manager for "calico"
	I1216 06:14:03.351577   11692 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1216 06:14:03.351577   11692 start.go:353] cluster config:
	{Name:calico-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:14:03.353576   11692 out.go:179] * Starting "calico-030800" primary control-plane node in "calico-030800" cluster
	I1216 06:14:03.356576   11692 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:14:03.358578   11692 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:14:03.361576   11692 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:14:03.361576   11692 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:14:03.361576   11692 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:14:03.361576   11692 cache.go:65] Caching tarball of preloaded images
	I1216 06:14:03.361576   11692 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:14:03.361576   11692 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:14:03.361576   11692 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\config.json ...
	I1216 06:14:03.362577   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\config.json: {Name:mk8a5991040f0281e40e8cd4c8dd677f025c0930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:03.430578   11692 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:14:03.430578   11692 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:14:03.430578   11692 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:14:03.430578   11692 start.go:360] acquireMachinesLock for calico-030800: {Name:mkf2eead692ec5d0a8599309d2c5369a445b3ac8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:14:03.430578   11692 start.go:364] duration metric: took 0s to acquireMachinesLock for "calico-030800"
	I1216 06:14:03.430578   11692 start.go:93] Provisioning new machine with config: &{Name:calico-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:14:03.430578   11692 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:14:03.434578   11692 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:14:03.434578   11692 start.go:159] libmachine.API.Create for "calico-030800" (driver="docker")
	I1216 06:14:03.434578   11692 client.go:173] LocalClient.Create starting
	I1216 06:14:03.435577   11692 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:14:03.435577   11692 main.go:143] libmachine: Decoding PEM data...
	I1216 06:14:03.435577   11692 main.go:143] libmachine: Parsing certificate...
	I1216 06:14:03.435577   11692 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:14:03.435577   11692 main.go:143] libmachine: Decoding PEM data...
	I1216 06:14:03.435577   11692 main.go:143] libmachine: Parsing certificate...
	I1216 06:14:03.439589   11692 cli_runner.go:164] Run: docker network inspect calico-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:14:03.491452   11692 cli_runner.go:211] docker network inspect calico-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:14:03.494998   11692 network_create.go:284] running [docker network inspect calico-030800] to gather additional debugging logs...
	I1216 06:14:03.494998   11692 cli_runner.go:164] Run: docker network inspect calico-030800
	W1216 06:14:03.545493   11692 cli_runner.go:211] docker network inspect calico-030800 returned with exit code 1
	I1216 06:14:03.545493   11692 network_create.go:287] error running [docker network inspect calico-030800]: docker network inspect calico-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-030800 not found
	I1216 06:14:03.545493   11692 network_create.go:289] output of [docker network inspect calico-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-030800 not found
	
	** /stderr **
	I1216 06:14:03.549491   11692 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:14:03.624524   11692 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:14:03.640216   11692 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:14:03.655287   11692 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ec0bd0}
	I1216 06:14:03.655824   11692 network_create.go:124] attempt to create docker network calico-030800 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1216 06:14:03.658961   11692 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-030800 calico-030800
	W1216 06:14:03.708961   11692 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-030800 calico-030800 returned with exit code 1
	W1216 06:14:03.708961   11692 network_create.go:149] failed to create docker network calico-030800 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-030800 calico-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:14:03.708961   11692 network_create.go:116] failed to create docker network calico-030800 192.168.67.0/24, will retry: subnet is taken
	I1216 06:14:03.734392   11692 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:14:03.747902   11692 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00172f110}
	I1216 06:14:03.747902   11692 network_create.go:124] attempt to create docker network calico-030800 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1216 06:14:03.752062   11692 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-030800 calico-030800
	W1216 06:14:03.806850   11692 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-030800 calico-030800 returned with exit code 1
	W1216 06:14:03.806850   11692 network_create.go:149] failed to create docker network calico-030800 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-030800 calico-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:14:03.806850   11692 network_create.go:116] failed to create docker network calico-030800 192.168.76.0/24, will retry: subnet is taken
	I1216 06:14:03.826265   11692 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:14:03.856831   11692 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:14:03.871990   11692 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:14:03.884986   11692 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018400c0}
	I1216 06:14:03.884986   11692 network_create.go:124] attempt to create docker network calico-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:14:03.888263   11692 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-030800 calico-030800
	I1216 06:14:04.030008   11692 network_create.go:108] docker network calico-030800 192.168.103.0/24 created
	I1216 06:14:04.030045   11692 kic.go:121] calculated static IP "192.168.103.2" for the "calico-030800" container
	I1216 06:14:04.038521   11692 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:14:04.109396   11692 cli_runner.go:164] Run: docker volume create calico-030800 --label name.minikube.sigs.k8s.io=calico-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:14:04.170266   11692 oci.go:103] Successfully created a docker volume calico-030800
	I1216 06:14:04.174600   11692 cli_runner.go:164] Run: docker run --rm --name calico-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-030800 --entrypoint /usr/bin/test -v calico-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:14:05.641512   11692 cli_runner.go:217] Completed: docker run --rm --name calico-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-030800 --entrypoint /usr/bin/test -v calico-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.466833s)
	I1216 06:14:05.641596   11692 oci.go:107] Successfully prepared a docker volume calico-030800
	I1216 06:14:05.641769   11692 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:14:05.641801   11692 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:14:05.645671   11692 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 06:14:20.333053   11692 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.6871317s)
	I1216 06:14:20.333134   11692 kic.go:203] duration metric: took 14.6911092s to extract preloaded images to volume ...
	I1216 06:14:20.336962   11692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:14:20.571066   11692 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:14:20.547440486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:14:20.576439   11692 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:14:20.806724   11692 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-030800 --name calico-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-030800 --network calico-030800 --ip 192.168.103.2 --volume calico-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:14:21.508024   11692 cli_runner.go:164] Run: docker container inspect calico-030800 --format={{.State.Running}}
	I1216 06:14:21.569521   11692 cli_runner.go:164] Run: docker container inspect calico-030800 --format={{.State.Status}}
	I1216 06:14:21.635518   11692 cli_runner.go:164] Run: docker exec calico-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:14:21.750432   11692 oci.go:144] the created container "calico-030800" has a running status.
	I1216 06:14:21.750432   11692 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa...
	I1216 06:14:21.849925   11692 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:14:21.931525   11692 cli_runner.go:164] Run: docker container inspect calico-030800 --format={{.State.Status}}
	I1216 06:14:21.995237   11692 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:14:21.995237   11692 kic_runner.go:114] Args: [docker exec --privileged calico-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:14:22.120618   11692 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa...
	I1216 06:14:24.230283   11692 cli_runner.go:164] Run: docker container inspect calico-030800 --format={{.State.Status}}
	I1216 06:14:24.288373   11692 machine.go:94] provisionDockerMachine start ...
	I1216 06:14:24.292373   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:24.349775   11692 main.go:143] libmachine: Using SSH client type: native
	I1216 06:14:24.364223   11692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55042 <nil> <nil>}
	I1216 06:14:24.364283   11692 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:14:24.542766   11692 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-030800
	
	I1216 06:14:24.542808   11692 ubuntu.go:182] provisioning hostname "calico-030800"
	I1216 06:14:24.547105   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:24.605153   11692 main.go:143] libmachine: Using SSH client type: native
	I1216 06:14:24.605763   11692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55042 <nil> <nil>}
	I1216 06:14:24.605814   11692 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-030800 && echo "calico-030800" | sudo tee /etc/hostname
	I1216 06:14:24.783187   11692 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-030800
	
	I1216 06:14:24.787296   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:24.846984   11692 main.go:143] libmachine: Using SSH client type: native
	I1216 06:14:24.847238   11692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55042 <nil> <nil>}
	I1216 06:14:24.847238   11692 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:14:25.011933   11692 main.go:143] libmachine: SSH cmd err, output: <nil>: 
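The provisioner edits /etc/hosts with the guarded shell snippet shown above so that reruns are no-ops: the 127.0.1.1 entry is rewritten if present, appended if absent, and left alone once the hostname is already there. A sketch of composing that command in Go; the hostname value is illustrative:

package main

import "fmt"

// hostsCmd renders the guarded /etc/hosts edit: rewrite the 127.0.1.1
// entry if one exists, append it otherwise, and do nothing when the
// hostname is already present.
func hostsCmd(name string) string {
	return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, name)
}

func main() { fmt.Println(hostsCmd("calico-030800")) }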
	I1216 06:14:25.011933   11692 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:14:25.011933   11692 ubuntu.go:190] setting up certificates
	I1216 06:14:25.011933   11692 provision.go:84] configureAuth start
	I1216 06:14:25.015205   11692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-030800
	I1216 06:14:25.070072   11692 provision.go:143] copyHostCerts
	I1216 06:14:25.071222   11692 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:14:25.071248   11692 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:14:25.071650   11692 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:14:25.072349   11692 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:14:25.072349   11692 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:14:25.072870   11692 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:14:25.073749   11692 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:14:25.073801   11692 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:14:25.073848   11692 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:14:25.074467   11692 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-030800 san=[127.0.0.1 192.168.103.2 calico-030800 localhost minikube]
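The server certificate above is minted with SANs covering every name the endpoint may be reached by: the loopback address, the container IP, the profile name, localhost, and minikube. As a rough self-contained sketch, the same SAN list can be placed on a throwaway self-signed certificate with Go's crypto/x509; the key type and validity below are arbitrary choices, not what minikube actually uses:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway key; minikube's actual key type/size may differ.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-030800"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // arbitrary validity
		// SAN list copied from the provision.go:117 line above.
		DNSNames:    []string{"calico-030800", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, _ := x509.ParseCertificate(der)
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}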
	I1216 06:14:25.240585   11692 provision.go:177] copyRemoteCerts
	I1216 06:14:25.243807   11692 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:14:25.247143   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:25.302939   11692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55042 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa Username:docker}
	I1216 06:14:25.428914   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 06:14:25.460293   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:14:25.483456   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:14:25.508168   11692 provision.go:87] duration metric: took 496.2289ms to configureAuth
	I1216 06:14:25.508168   11692 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:14:25.508168   11692 config.go:182] Loaded profile config "calico-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:14:25.511992   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:25.568828   11692 main.go:143] libmachine: Using SSH client type: native
	I1216 06:14:25.569097   11692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55042 <nil> <nil>}
	I1216 06:14:25.569097   11692 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:14:25.743813   11692 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:14:25.743854   11692 ubuntu.go:71] root file system type: overlay
	I1216 06:14:25.743854   11692 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:14:25.747926   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:25.805885   11692 main.go:143] libmachine: Using SSH client type: native
	I1216 06:14:25.805885   11692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55042 <nil> <nil>}
	I1216 06:14:25.805885   11692 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:14:25.982940   11692 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:14:25.987557   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:26.043426   11692 main.go:143] libmachine: Using SSH client type: native
	I1216 06:14:26.044136   11692 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55042 <nil> <nil>}
	I1216 06:14:26.044136   11692 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:14:27.527904   11692 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:14:25.967032483 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:14:27.527904   11692 machine.go:97] duration metric: took 3.2394871s to provisionDockerMachine
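Two details of the docker.service update above are worth noting: the override clears ExecStart= before redefining it (systemd rejects multiple ExecStart lines for anything but Type=oneshot, as the unit's own comment says), and the new unit is only swapped in and docker restarted when "diff -u" reports a change, which keeps the step idempotent. The unit text appears to be template-rendered; a toy Go text/template sketch of that idea, with field names invented purely for illustration:

package main

import (
	"os"
	"text/template"
)

// A toy stand-in for the unit template: ExecStart= is emitted empty
// first to clear the inherited command, then redefined with the
// driver-specific flags.
const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd://{{range .Opts}} {{.}}{{end}}
`

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	_ = t.Execute(os.Stdout, struct{ Opts []string }{[]string{
		"--containerd=/run/containerd/containerd.sock",
		"--default-ulimit=nofile=1048576:1048576",
		"--insecure-registry 10.96.0.0/12",
	}})
}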
	I1216 06:14:27.528421   11692 client.go:176] duration metric: took 24.0935174s to LocalClient.Create
	I1216 06:14:27.528421   11692 start.go:167] duration metric: took 24.0935174s to libmachine.API.Create "calico-030800"
	I1216 06:14:27.528421   11692 start.go:293] postStartSetup for "calico-030800" (driver="docker")
	I1216 06:14:27.528506   11692 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:14:27.533124   11692 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:14:27.535893   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:27.592739   11692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55042 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa Username:docker}
	I1216 06:14:27.730231   11692 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:14:27.737221   11692 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:14:27.737221   11692 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:14:27.737221   11692 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:14:27.740207   11692 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:14:27.740975   11692 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:14:27.748488   11692 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:14:27.762854   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:14:27.790394   11692 start.go:296] duration metric: took 261.9693ms for postStartSetup
	I1216 06:14:27.795545   11692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-030800
	I1216 06:14:27.849199   11692 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\config.json ...
	I1216 06:14:27.854975   11692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:14:27.858192   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:27.913434   11692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55042 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa Username:docker}
	I1216 06:14:28.037144   11692 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:14:28.046882   11692 start.go:128] duration metric: took 24.6159718s to createHost
	I1216 06:14:28.046882   11692 start.go:83] releasing machines lock for "calico-030800", held for 24.6159718s
	I1216 06:14:28.050903   11692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-030800
	I1216 06:14:28.104972   11692 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:14:28.109626   11692 ssh_runner.go:195] Run: cat /version.json
	I1216 06:14:28.109626   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:28.112105   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:28.163922   11692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55042 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa Username:docker}
	I1216 06:14:28.165104   11692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55042 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa Username:docker}
	W1216 06:14:28.290120   11692 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:14:28.295445   11692 ssh_runner.go:195] Run: systemctl --version
	I1216 06:14:28.313383   11692 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:14:28.322715   11692 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:14:28.326969   11692 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:14:28.393247   11692 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
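Before wiring up calico, any pre-existing bridge/podman CNI configs are renamed with a .mk_disabled suffix so they cannot shadow the chosen CNI. A rough Go equivalent of that find-and-rename, with paths as in the log and error handling simplified:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("skip:", err)
				continue
			}
			fmt.Println("disabled:", m)
		}
	}
}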
	I1216 06:14:28.393247   11692 start.go:496] detecting cgroup driver to use...
	I1216 06:14:28.393247   11692 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:14:28.393858   11692 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1216 06:14:28.401119   11692 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:14:28.401119   11692 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:14:28.425623   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:14:28.443614   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:14:28.456616   11692 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:14:28.460613   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:14:28.477233   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:14:28.495764   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:14:28.513158   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:14:28.532969   11692 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:14:28.551997   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:14:28.570783   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:14:28.589300   11692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:14:28.608687   11692 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:14:28.625927   11692 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:14:28.645122   11692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:28.778597   11692 ssh_runner.go:195] Run: sudo systemctl restart containerd
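The block above rewrites /etc/containerd/config.toml in place with sed so containerd matches the detected "cgroupfs" driver (SystemdCgroup = false), pins the pause image, and normalizes the runc runtime name, then restarts containerd. The core substitution, done in pure Go for illustration against a made-up TOML fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up fragment standing in for /etc/containerd/config.toml.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}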
	I1216 06:14:28.933476   11692 start.go:496] detecting cgroup driver to use...
	I1216 06:14:28.933476   11692 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:14:28.938333   11692 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:14:28.962796   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:14:28.983871   11692 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:14:29.042391   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:14:29.063943   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:14:29.083216   11692 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:14:29.109847   11692 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:14:29.124143   11692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:14:29.135955   11692 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:14:29.162709   11692 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:14:29.297324   11692 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:14:29.425258   11692 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:14:29.425479   11692 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:14:29.450930   11692 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:14:29.471425   11692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:29.607306   11692 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:14:30.415810   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:14:30.437215   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:14:30.463718   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:14:30.489203   11692 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:14:30.636561   11692 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:14:30.791091   11692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:30.933683   11692 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:14:30.958083   11692 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:14:30.979421   11692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:31.122463   11692 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:14:31.221945   11692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:14:31.240927   11692 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:14:31.244519   11692 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:14:31.253233   11692 start.go:564] Will wait 60s for crictl version
	I1216 06:14:31.257633   11692 ssh_runner.go:195] Run: which crictl
	I1216 06:14:31.267529   11692 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:14:31.311415   11692 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:14:31.314494   11692 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:14:31.353961   11692 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:14:31.400018   11692 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:14:31.403588   11692 cli_runner.go:164] Run: docker exec -t calico-030800 dig +short host.docker.internal
	I1216 06:14:31.533022   11692 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:14:31.537548   11692 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:14:31.544312   11692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
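The host IP is discovered by running dig for host.docker.internal inside the container and is then pinned in /etc/hosts as host.minikube.internal. The same resolution from Go, for illustration; the answer (192.168.65.254 here) is specific to this Docker Desktop setup:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Only resolves meaningfully inside a Docker Desktop container.
	addrs, err := net.LookupHost("host.docker.internal")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("host ip candidates:", addrs)
}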
	I1216 06:14:31.566816   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:31.621438   11692 kubeadm.go:884] updating cluster {Name:calico-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:14:31.621833   11692 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:14:31.626955   11692 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:14:31.660711   11692 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:14:31.660780   11692 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:14:31.664818   11692 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:14:31.694093   11692 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:14:31.694304   11692 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:14:31.694460   11692 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:14:31.695771   11692 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1216 06:14:31.699222   11692 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:14:31.779953   11692 cni.go:84] Creating CNI manager for "calico"
	I1216 06:14:31.780034   11692 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:14:31.780079   11692 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-030800 NodeName:calico-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:14:31.780079   11692 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
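Note that the generated config keeps the pod network (podSubnet 10.244.0.0/16) and the service network (serviceSubnet 10.96.0.0/12) disjoint, which kubeadm requires. A quick Go check of that property:

package main

import (
	"fmt"
	"net"
)

// Two CIDR blocks either nest or are disjoint, so an overlap check only
// needs to test each base address against the other network.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16") // podSubnet from the config
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")  // serviceSubnet from the config
	fmt.Println("pod/service CIDRs overlap:", overlaps(pods, svcs)) // false
}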
	
	I1216 06:14:31.784395   11692 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:14:31.796997   11692 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:14:31.800996   11692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:14:31.812697   11692 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 06:14:31.832079   11692 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:14:31.851078   11692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1216 06:14:31.874341   11692 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:14:31.879986   11692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:14:31.899182   11692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:32.042362   11692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:14:32.062982   11692 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800 for IP: 192.168.103.2
	I1216 06:14:32.062982   11692 certs.go:195] generating shared ca certs ...
	I1216 06:14:32.062982   11692 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:32.063651   11692 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:14:32.063651   11692 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:14:32.064228   11692 certs.go:257] generating profile certs ...
	I1216 06:14:32.064447   11692 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\client.key
	I1216 06:14:32.064447   11692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\client.crt with IP's: []
	I1216 06:14:32.144719   11692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\client.crt ...
	I1216 06:14:32.145717   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\client.crt: {Name:mke7ab3ba73172b972a8f5ab10c6eda39dc75a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:32.146557   11692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\client.key ...
	I1216 06:14:32.146557   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\client.key: {Name:mk158c2fd0a2d67de65eca325967b1d8b12af563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:32.147035   11692 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.key.8ac4ec5e
	I1216 06:14:32.147035   11692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.crt.8ac4ec5e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:14:32.394986   11692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.crt.8ac4ec5e ...
	I1216 06:14:32.394986   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.crt.8ac4ec5e: {Name:mk70f1b1a1e7e151c3736630a40cd2e440f2b743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:32.395853   11692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.key.8ac4ec5e ...
	I1216 06:14:32.395853   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.key.8ac4ec5e: {Name:mkb89e8cff3fba60e63fa2ebdf514fb4d0235e6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:32.397254   11692 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.crt.8ac4ec5e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.crt
	I1216 06:14:32.412457   11692 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.key.8ac4ec5e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.key
	I1216 06:14:32.413040   11692 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.key
	I1216 06:14:32.413040   11692 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.crt with IP's: []
	I1216 06:14:32.695788   11692 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.crt ...
	I1216 06:14:32.695788   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.crt: {Name:mk5e632a4d1c493deeaee3db013074969d24c885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:32.696877   11692 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.key ...
	I1216 06:14:32.696877   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.key: {Name:mk8c25431813043e7ae8c07195f74aa32d477bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:32.712088   11692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:14:32.712498   11692 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:14:32.712498   11692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:14:32.712774   11692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:14:32.712964   11692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:14:32.713172   11692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:14:32.713328   11692 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:14:32.713647   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:14:32.744476   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:14:32.772132   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:14:32.795790   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:14:32.821044   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:14:32.843968   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:14:32.876384   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:14:32.905803   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:14:32.933513   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:14:32.961957   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:14:32.983912   11692 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:14:33.009256   11692 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:14:33.030738   11692 ssh_runner.go:195] Run: openssl version
	I1216 06:14:33.045144   11692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:14:33.061897   11692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:14:33.078447   11692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:14:33.086309   11692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:14:33.089657   11692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:14:33.136742   11692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:14:33.154304   11692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
	I1216 06:14:33.173312   11692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:14:33.190693   11692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:14:33.207686   11692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:14:33.215175   11692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:14:33.219349   11692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:14:33.268156   11692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:14:33.283712   11692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:14:33.301655   11692 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:14:33.319284   11692 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:14:33.336737   11692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:14:33.347147   11692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:14:33.351432   11692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:14:33.397960   11692 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:14:33.413712   11692 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
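The openssl x509 -hash / ln -fs pairs above build OpenSSL-style trust-directory entries: each CA becomes reachable under <subject-hash>.0 in /etc/ssl/certs. A sketch of one such step from Go, assuming openssl on PATH and write access to the trust directory; the paths mirror the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Path taken from the log; needs openssl on PATH and root for the link.
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// ln -fs would replace an existing link; os.Symlink reports it instead.
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink:", err)
	}
}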
	I1216 06:14:33.435562   11692 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:14:33.445390   11692 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:14:33.445420   11692 kubeadm.go:401] StartCluster: {Name:calico-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:14:33.448499   11692 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:14:33.483284   11692 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:14:33.499303   11692 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:14:33.513448   11692 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:14:33.517864   11692 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:14:33.530749   11692 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:14:33.530781   11692 kubeadm.go:158] found existing configuration files:
	
	I1216 06:14:33.534706   11692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:14:33.548954   11692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:14:33.553210   11692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:14:33.571769   11692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:14:33.584983   11692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:14:33.589042   11692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:14:33.607502   11692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:14:33.624788   11692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:14:33.628608   11692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:14:33.649004   11692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:14:33.664695   11692 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:14:33.670094   11692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:14:33.685718   11692 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:14:33.798240   11692 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:14:33.805792   11692 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:14:33.898925   11692 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 06:14:48.201714   11692 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:14:48.201714   11692 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:14:48.201714   11692 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:14:48.201714   11692 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:14:48.202358   11692 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:14:48.202426   11692 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:14:48.207934   11692 out.go:252]   - Generating certificates and keys ...
	I1216 06:14:48.207934   11692 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:14:48.207934   11692 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:14:48.208573   11692 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:14:48.208683   11692 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:14:48.208821   11692 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:14:48.208821   11692 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:14:48.208821   11692 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:14:48.208821   11692 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:14:48.208821   11692 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:14:48.209486   11692 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:14:48.209486   11692 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:14:48.209486   11692 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:14:48.209486   11692 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:14:48.210010   11692 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:14:48.210043   11692 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:14:48.210043   11692 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:14:48.210043   11692 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:14:48.210043   11692 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:14:48.210043   11692 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:14:48.210697   11692 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:14:48.210697   11692 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:14:48.212894   11692 out.go:252]   - Booting up control plane ...
	I1216 06:14:48.213213   11692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:14:48.213355   11692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:14:48.213500   11692 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:14:48.213666   11692 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:14:48.213666   11692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:14:48.213666   11692 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:14:48.213666   11692 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:14:48.213666   11692 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:14:48.213666   11692 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:14:48.214713   11692 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:14:48.214856   11692 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.006269182s
	I1216 06:14:48.215022   11692 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:14:48.215171   11692 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:14:48.215331   11692 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:14:48.215526   11692 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:14:48.215685   11692 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.817013815s
	I1216 06:14:48.215845   11692 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.716469167s
	I1216 06:14:48.215998   11692 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.501877586s
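The endpoints polled by these checks can be probed by hand from inside the node; a minimal sketch using the exact URLs printed above (the apiserver address is the node IP from the certificate SANs):

    curl -s  http://127.0.0.1:10248/healthz      # kubelet
    curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez       # kube-scheduler
    curl -sk https://192.168.103.2:8443/livez    # kube-apiserver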
	I1216 06:14:48.216136   11692 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:14:48.216335   11692 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:14:48.216335   11692 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:14:48.216659   11692 kubeadm.go:319] [mark-control-plane] Marking the node calico-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:14:48.216659   11692 kubeadm.go:319] [bootstrap-token] Using token: tdpj9h.rkbcokdny8n53hpp
	I1216 06:14:48.219622   11692 out.go:252]   - Configuring RBAC rules ...
	I1216 06:14:48.219622   11692 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:14:48.219622   11692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:14:48.220005   11692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:14:48.220005   11692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:14:48.220005   11692 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:14:48.220844   11692 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:14:48.221016   11692 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:14:48.221161   11692 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:14:48.221326   11692 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:14:48.221326   11692 kubeadm.go:319] 
	I1216 06:14:48.221494   11692 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:14:48.221494   11692 kubeadm.go:319] 
	I1216 06:14:48.221659   11692 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:14:48.221659   11692 kubeadm.go:319] 
	I1216 06:14:48.221802   11692 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:14:48.221949   11692 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:14:48.221949   11692 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:14:48.222034   11692 kubeadm.go:319] 
	I1216 06:14:48.222134   11692 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:14:48.222134   11692 kubeadm.go:319] 
	I1216 06:14:48.222134   11692 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:14:48.222283   11692 kubeadm.go:319] 
	I1216 06:14:48.222434   11692 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:14:48.222554   11692 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:14:48.222554   11692 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:14:48.222554   11692 kubeadm.go:319] 
	I1216 06:14:48.222554   11692 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:14:48.222554   11692 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:14:48.222554   11692 kubeadm.go:319] 
	I1216 06:14:48.223222   11692 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tdpj9h.rkbcokdny8n53hpp \
	I1216 06:14:48.223465   11692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:14:48.223465   11692 kubeadm.go:319] 	--control-plane 
	I1216 06:14:48.223465   11692 kubeadm.go:319] 
	I1216 06:14:48.223757   11692 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:14:48.223757   11692 kubeadm.go:319] 
	I1216 06:14:48.223921   11692 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tdpj9h.rkbcokdny8n53hpp \
	I1216 06:14:48.224131   11692 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
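If the printed token or hash is lost, both can be regenerated on the control plane; a sketch using the pipeline kubeadm documents for the CA hash, noting that minikube keeps its certificates under /var/lib/minikube/certs (per the [certs] lines above) rather than /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    kubeadm token create --print-join-command    # reissues a full join line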
	I1216 06:14:48.224131   11692 cni.go:84] Creating CNI manager for "calico"
	I1216 06:14:48.229368   11692 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1216 06:14:48.232218   11692 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1216 06:14:48.232218   11692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1216 06:14:48.259668   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 06:14:50.395309   11692 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.1356116s)
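Applying the manifest only creates the Calico objects; the node stays NotReady until calico-node comes up, which is what the node_ready retries further down are waiting out. A hedged sketch of watching that directly (k8s-app=calico-node is the label Calico's stock manifest applies; adjust if the bundled manifest differs):

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l k8s-app=calico-node --watch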
	I1216 06:14:50.395309   11692 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:14:50.401245   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-030800 minikube.k8s.io/updated_at=2025_12_16T06_14_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=calico-030800 minikube.k8s.io/primary=true
	I1216 06:14:50.401911   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:50.411292   11692 ops.go:34] apiserver oom_adj: -16
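An oom_adj of -16 means the kernel OOM killer strongly deprioritizes kube-apiserver (the legacy scale runs -17 to +15, with -17 exempt; newer kernels map this onto oom_score_adj in -1000 to +1000). The same check by hand:

    cat /proc/$(pgrep kube-apiserver)/oom_adj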
	I1216 06:14:50.538056   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:51.039703   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:51.538981   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:52.039468   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:52.539216   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:53.039470   11692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:14:53.180389   11692 kubeadm.go:1114] duration metric: took 2.7845161s to wait for elevateKubeSystemPrivileges
	I1216 06:14:53.180452   11692 kubeadm.go:403] duration metric: took 19.7347657s to StartCluster
	I1216 06:14:53.180516   11692 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:53.180713   11692 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:14:53.182193   11692 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:14:53.183461   11692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:14:53.183550   11692 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:14:53.183786   11692 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:14:53.183786   11692 addons.go:70] Setting storage-provisioner=true in profile "calico-030800"
	I1216 06:14:53.183786   11692 addons.go:239] Setting addon storage-provisioner=true in "calico-030800"
	I1216 06:14:53.183786   11692 host.go:66] Checking if "calico-030800" exists ...
	I1216 06:14:53.183786   11692 addons.go:70] Setting default-storageclass=true in profile "calico-030800"
	I1216 06:14:53.183786   11692 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-030800"
	I1216 06:14:53.183786   11692 config.go:182] Loaded profile config "calico-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:14:53.186398   11692 out.go:179] * Verifying Kubernetes components...
	I1216 06:14:53.194886   11692 cli_runner.go:164] Run: docker container inspect calico-030800 --format={{.State.Status}}
	I1216 06:14:53.195596   11692 cli_runner.go:164] Run: docker container inspect calico-030800 --format={{.State.Status}}
	I1216 06:14:53.196221   11692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:14:53.254665   11692 addons.go:239] Setting addon default-storageclass=true in "calico-030800"
	I1216 06:14:53.254665   11692 host.go:66] Checking if "calico-030800" exists ...
	I1216 06:14:53.256665   11692 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:14:53.259665   11692 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:14:53.259665   11692 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:14:53.261665   11692 cli_runner.go:164] Run: docker container inspect calico-030800 --format={{.State.Status}}
	I1216 06:14:53.262665   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:53.321493   11692 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:14:53.321493   11692 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:14:53.322493   11692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55042 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa Username:docker}
	I1216 06:14:53.324499   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:53.375000   11692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55042 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-030800\id_rsa Username:docker}
	I1216 06:14:53.582033   11692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:14:53.887001   11692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:14:53.889253   11692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:14:53.892511   11692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:14:54.597013   11692 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.0149661s)
	I1216 06:14:54.597013   11692 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
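The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward plugin; the injected stanza comes out as:

    hosts {
       192.168.65.254 host.minikube.internal
       fallthrough
    }

fallthrough lets every other name fall back to the normal forward path.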
	I1216 06:14:54.602946   11692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-030800
	I1216 06:14:54.665797   11692 node_ready.go:35] waiting up to 15m0s for node "calico-030800" to be "Ready" ...
	I1216 06:14:55.103461   11692 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-030800" context rescaled to 1 replicas
	I1216 06:14:55.160160   11692 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2708901s)
	I1216 06:14:55.160160   11692 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.2676321s)
	I1216 06:14:55.179400   11692 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:14:55.181248   11692 addons.go:530] duration metric: took 1.9974729s for enable addons: enabled=[storage-provisioner default-storageclass]
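The enabled set can be confirmed from the host with the addons subcommand; a minimal sketch against this profile:

    minikube addons list -p calico-030800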
	W1216 06:14:56.673073   11692 node_ready.go:57] node "calico-030800" has "Ready":"False" status (will retry)
	W1216 06:14:58.674959   11692 node_ready.go:57] node "calico-030800" has "Ready":"False" status (will retry)
	W1216 06:15:01.172579   11692 node_ready.go:57] node "calico-030800" has "Ready":"False" status (will retry)
	W1216 06:15:03.670555   11692 node_ready.go:57] node "calico-030800" has "Ready":"False" status (will retry)
	W1216 06:15:05.671541   11692 node_ready.go:57] node "calico-030800" has "Ready":"False" status (will retry)
	W1216 06:15:07.672292   11692 node_ready.go:57] node "calico-030800" has "Ready":"False" status (will retry)
	W1216 06:15:09.672344   11692 node_ready.go:57] node "calico-030800" has "Ready":"False" status (will retry)
	I1216 06:15:11.671505   11692 node_ready.go:49] node "calico-030800" is "Ready"
	I1216 06:15:11.671505   11692 node_ready.go:38] duration metric: took 17.0054788s for node "calico-030800" to be "Ready" ...
	I1216 06:15:11.671505   11692 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:15:11.676498   11692 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:15:11.700509   11692 api_server.go:72] duration metric: took 18.5166191s to wait for apiserver process to appear ...
	I1216 06:15:11.700509   11692 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:15:11.700509   11692 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55041/healthz ...
	I1216 06:15:11.711496   11692 api_server.go:279] https://127.0.0.1:55041/healthz returned 200:
	ok
	I1216 06:15:11.714496   11692 api_server.go:141] control plane version: v1.34.2
	I1216 06:15:11.714496   11692 api_server.go:131] duration metric: took 13.9864ms to wait for apiserver health ...
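The healthz probe goes through the Docker-published port (127.0.0.1:55041 forwards to 8443 in the container, per the port inspections above); the same check by hand, with the caveat that the forwarded port is assigned per run:

    curl -sk https://127.0.0.1:55041/healthz    # expect: ok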
	I1216 06:15:11.714496   11692 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:15:11.721495   11692 system_pods.go:59] 9 kube-system pods found
	I1216 06:15:11.721495   11692 system_pods.go:61] "calico-kube-controllers-5c676f698c-mff5d" [baaacd8e-234d-46c9-8f36-f014ec7a9417] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 06:15:11.721495   11692 system_pods.go:61] "calico-node-wrqs4" [32bf00a0-0258-497c-ad8f-1ee716276745] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 06:15:11.721495   11692 system_pods.go:61] "coredns-66bc5c9577-j7vnq" [bbe0a84b-a582-4aa9-a610-5922f145fca3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:15:11.721495   11692 system_pods.go:61] "etcd-calico-030800" [5536afcd-fb4d-4dfe-a85a-a3880c70de84] Running
	I1216 06:15:11.721495   11692 system_pods.go:61] "kube-apiserver-calico-030800" [6df6f2fc-d402-4cd4-b287-5d0b6807753f] Running
	I1216 06:15:11.721495   11692 system_pods.go:61] "kube-controller-manager-calico-030800" [d62d546a-0765-4068-9bb4-7e84e30875b7] Running
	I1216 06:15:11.721495   11692 system_pods.go:61] "kube-proxy-qdm7q" [3c4bcc61-39fa-427a-b596-52a4203cc8b6] Running
	I1216 06:15:11.721495   11692 system_pods.go:61] "kube-scheduler-calico-030800" [02746219-ae1e-41c0-b53e-1433cc3f2da7] Running
	I1216 06:15:11.721495   11692 system_pods.go:61] "storage-provisioner" [380978b4-64ab-4975-9c07-a976582834d8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:15:11.721495   11692 system_pods.go:74] duration metric: took 6.9995ms to wait for pod list to return data ...
	I1216 06:15:11.721495   11692 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:15:11.727495   11692 default_sa.go:45] found service account: "default"
	I1216 06:15:11.727495   11692 default_sa.go:55] duration metric: took 5.9995ms for default service account to be created ...
	I1216 06:15:11.727495   11692 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:15:11.733523   11692 system_pods.go:86] 9 kube-system pods found
	I1216 06:15:11.733523   11692 system_pods.go:89] "calico-kube-controllers-5c676f698c-mff5d" [baaacd8e-234d-46c9-8f36-f014ec7a9417] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 06:15:11.733523   11692 system_pods.go:89] "calico-node-wrqs4" [32bf00a0-0258-497c-ad8f-1ee716276745] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 06:15:11.733523   11692 system_pods.go:89] "coredns-66bc5c9577-j7vnq" [bbe0a84b-a582-4aa9-a610-5922f145fca3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:15:11.733523   11692 system_pods.go:89] "etcd-calico-030800" [5536afcd-fb4d-4dfe-a85a-a3880c70de84] Running
	I1216 06:15:11.733523   11692 system_pods.go:89] "kube-apiserver-calico-030800" [6df6f2fc-d402-4cd4-b287-5d0b6807753f] Running
	I1216 06:15:11.733523   11692 system_pods.go:89] "kube-controller-manager-calico-030800" [d62d546a-0765-4068-9bb4-7e84e30875b7] Running
	I1216 06:15:11.733523   11692 system_pods.go:89] "kube-proxy-qdm7q" [3c4bcc61-39fa-427a-b596-52a4203cc8b6] Running
	I1216 06:15:11.733523   11692 system_pods.go:89] "kube-scheduler-calico-030800" [02746219-ae1e-41c0-b53e-1433cc3f2da7] Running
	I1216 06:15:11.733523   11692 system_pods.go:89] "storage-provisioner" [380978b4-64ab-4975-9c07-a976582834d8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:15:11.733523   11692 retry.go:31] will retry after 272.059303ms: missing components: kube-dns
	I1216 06:15:12.072135 through 06:15:14.002502   11692 system_pods.go:86] five further polls found the same 9 kube-system pods (calico-kube-controllers, calico-node, coredns-66bc5c9577-j7vnq, and storage-provisioner still Pending; etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler Running); retry.go scheduled retries of 366ms, 382ms, 468ms, 674ms, and 654ms, each for the missing component: kube-dns
	I1216 06:15:14.674273   11692 system_pods.go:86] 9 kube-system pods found
	I1216 06:15:14.674273   11692 system_pods.go:89] "calico-kube-controllers-5c676f698c-mff5d" [baaacd8e-234d-46c9-8f36-f014ec7a9417] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 06:15:14.674273   11692 system_pods.go:89] "calico-node-wrqs4" [32bf00a0-0258-497c-ad8f-1ee716276745] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 06:15:14.674273   11692 system_pods.go:89] "coredns-66bc5c9577-j7vnq" [bbe0a84b-a582-4aa9-a610-5922f145fca3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:15:14.674273   11692 system_pods.go:89] "etcd-calico-030800" [5536afcd-fb4d-4dfe-a85a-a3880c70de84] Running
	I1216 06:15:14.674273   11692 system_pods.go:89] "kube-apiserver-calico-030800" [6df6f2fc-d402-4cd4-b287-5d0b6807753f] Running
	I1216 06:15:14.674273   11692 system_pods.go:89] "kube-controller-manager-calico-030800" [d62d546a-0765-4068-9bb4-7e84e30875b7] Running
	I1216 06:15:14.674273   11692 system_pods.go:89] "kube-proxy-qdm7q" [3c4bcc61-39fa-427a-b596-52a4203cc8b6] Running
	I1216 06:15:14.674273   11692 system_pods.go:89] "kube-scheduler-calico-030800" [02746219-ae1e-41c0-b53e-1433cc3f2da7] Running
	I1216 06:15:14.674273   11692 system_pods.go:89] "storage-provisioner" [380978b4-64ab-4975-9c07-a976582834d8] Running
	I1216 06:15:14.674273   11692 retry.go:31] will retry after 741.823261ms: missing components: kube-dns
	I1216 06:15:15.425085 through 06:15:29.504794   11692 system_pods.go:86] seven further polls repeated the preceding list (storage-provisioner Running; the calico pods and coredns still Pending); retry.go backed off through 984ms, 1.169s, 2.249s, 2.676s, 3.281s, 3.621s, and 5.306s, still reporting the missing component: kube-dns
	I1216 06:15:34.819080   11692 system_pods.go:86] 9 kube-system pods found
	I1216 06:15:34.819179   11692 system_pods.go:89] "calico-kube-controllers-5c676f698c-mff5d" [baaacd8e-234d-46c9-8f36-f014ec7a9417] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1216 06:15:34.819179   11692 system_pods.go:89] "calico-node-wrqs4" [32bf00a0-0258-497c-ad8f-1ee716276745] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1216 06:15:34.819229   11692 system_pods.go:89] "coredns-66bc5c9577-j7vnq" [bbe0a84b-a582-4aa9-a610-5922f145fca3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:15:34.819229   11692 system_pods.go:89] "etcd-calico-030800" [5536afcd-fb4d-4dfe-a85a-a3880c70de84] Running
	I1216 06:15:34.819286   11692 system_pods.go:89] "kube-apiserver-calico-030800" [6df6f2fc-d402-4cd4-b287-5d0b6807753f] Running
	I1216 06:15:34.819286   11692 system_pods.go:89] "kube-controller-manager-calico-030800" [d62d546a-0765-4068-9bb4-7e84e30875b7] Running
	I1216 06:15:34.819286   11692 system_pods.go:89] "kube-proxy-qdm7q" [3c4bcc61-39fa-427a-b596-52a4203cc8b6] Running
	I1216 06:15:34.819286   11692 system_pods.go:89] "kube-scheduler-calico-030800" [02746219-ae1e-41c0-b53e-1433cc3f2da7] Running
	I1216 06:15:34.819286   11692 system_pods.go:89] "storage-provisioner" [380978b4-64ab-4975-9c07-a976582834d8] Running
	I1216 06:15:34.819365   11692 retry.go:31] will retry after 4.432035652s: missing components: kube-dns
	I1216 06:15:39.259526   11692 system_pods.go:86] one further poll repeated the preceding list (calico-node Running; calico-kube-controllers and coredns still Pending); retry.go scheduled a 5.549s retry for the missing component: kube-dns
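kube-dns stays the lone missing component through all of these polls because coredns cannot get a pod IP until the Calico CNI is fully initialized; the calico-node Running transition above is the gating step. A sketch of watching just the DNS pods (k8s-app=kube-dns is the label the upstream coredns deployment carries):

    kubectl -n kube-system get pods -l k8s-app=kube-dns --watch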
	
	
	==> Docker <==
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402735317Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402828927Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402844429Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402852530Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402861131Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402891834Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.402934238Z" level=info msg="Initializing buildkit"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.580612363Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.589812059Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590000679Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590040684Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:05:08 no-preload-686300 dockerd[1175]: time="2025-12-16T06:05:08.590028382Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:05:08 no-preload-686300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:05:09 no-preload-686300 cri-dockerd[1466]: time="2025-12-16T06:05:09Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:05:09 no-preload-686300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:15:44.486427   13814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:15:44.487622   13814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:15:44.488537   13814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:15:44.490800   13814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:15:44.491707   13814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.597981] CPU: 15 PID: 403663 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7ff53f0eeb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7ff53f0eeaf6.
	[  +0.000001] RSP: 002b:00007ffe3946e500 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.787002] CPU: 14 PID: 403816 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f82d2ac3b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f82d2ac3af6.
	[  +0.000001] RSP: 002b:00007fff7fa0c690 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000000] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000000] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +9.482562] tmpfs: Unknown parameter 'noswap'
	[  +8.547815] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:15:44 up  1:52,  0 user,  load average: 3.82, 4.04, 3.96
	Linux no-preload-686300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:15:41 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:15:42 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 16 06:15:42 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:42 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:42 no-preload-686300 kubelet[13633]: E1216 06:15:42.171673   13633 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:15:42 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:15:42 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:15:42 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 16 06:15:42 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:42 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:42 no-preload-686300 kubelet[13658]: E1216 06:15:42.927903   13658 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:15:42 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:15:42 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:15:43 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 16 06:15:43 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:43 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:43 no-preload-686300 kubelet[13688]: E1216 06:15:43.673656   13688 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:15:43 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:15:43 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:15:44 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 16 06:15:44 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:44 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:15:44 no-preload-686300 kubelet[13786]: E1216 06:15:44.431890   13786 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:15:44 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:15:44 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
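
The kubelet crash loop captured above is the actual failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, and systemd keeps restarting it (restart counter 481 through 484), which matches the cgroup v1 deprecation warning in the Docker log earlier in this dump. A quick way to confirm which cgroup version a host presents (a minimal sketch, not part of the test run; the .wslconfig setting is a general WSL2 option, not verified against this CI host):

    # Prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on cgroup v1.
    stat -fc %T /sys/fs/cgroup/

    # On WSL2 the unified hierarchy can be forced from the Windows side via
    # %UserProfile%\.wslconfig (then run `wsl --shutdown` to apply):
    #   [wsl2]
    #   kernelCommandLine = cgroup_no_v1=all
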
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 6 (567.6761ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 06:15:45.401243   10212 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (114.50s)
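
The exit-status-6 path above is a follow-on symptom rather than a separate fault: the failed start never rewrote the kubeconfig, so the profile's endpoint is missing and `status` reports the apiserver as Stopped. The warning in the stdout block already names the recovery command; scoped to the profile from this run it would look like this (illustrative only):

    # Re-sync the kubeconfig entry for the profile, then re-check component state.
    minikube update-context -p no-preload-686300
    minikube status -p no-preload-686300
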

TestStartStop/group/no-preload/serial/SecondStart (379.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m16.0195492s)

-- stdout --
	* [no-preload-686300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "no-preload-686300" primary control-plane node in "no-preload-686300" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1216 06:15:47.979577    2100 out.go:360] Setting OutFile to fd 1972 ...
	I1216 06:15:48.028755    2100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:15:48.028755    2100 out.go:374] Setting ErrFile to fd 1148...
	I1216 06:15:48.028755    2100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:15:48.043433    2100 out.go:368] Setting JSON to false
	I1216 06:15:48.046345    2100 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6769,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:15:48.046345    2100 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:15:48.049346    2100 out.go:179] * [no-preload-686300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:15:48.053458    2100 notify.go:221] Checking for updates...
	I1216 06:15:48.053458    2100 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:15:48.057632    2100 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:15:48.061089    2100 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:15:48.067048    2100 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:15:48.073242    2100 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:15:48.078055    2100 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:15:48.078831    2100 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:15:48.203359    2100 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:15:48.206357    2100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:15:48.473036    2100 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:15:48.450762731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:15:48.477180    2100 out.go:179] * Using the docker driver based on existing profile
	I1216 06:15:48.483381    2100 start.go:309] selected driver: docker
	I1216 06:15:48.483381    2100 start.go:927] validating driver "docker" against &{Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:15:48.483625    2100 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:15:48.579070    2100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:15:48.841358    2100 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:15:48.817390559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:15:48.842360    2100 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:15:48.842360    2100 cni.go:84] Creating CNI manager for ""
	I1216 06:15:48.842360    2100 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:15:48.842360    2100 start.go:353] cluster config:
	{Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:15:48.845357    2100 out.go:179] * Starting "no-preload-686300" primary control-plane node in "no-preload-686300" cluster
	I1216 06:15:48.847358    2100 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:15:48.850678    2100 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:15:48.852808    2100 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:15:48.852808    2100 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:15:48.853054    2100 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\config.json ...
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1216 06:15:48.853287    2100 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1216 06:15:49.196661    2100 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:15:49.196661    2100 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:15:49.196740    2100 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:15:49.196740    2100 start.go:360] acquireMachinesLock for no-preload-686300: {Name:mk990048edb42dd06e1fb0f2c86d8b2d42a7457e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:49.197120    2100 start.go:364] duration metric: took 270.7µs to acquireMachinesLock for "no-preload-686300"
	I1216 06:15:49.197296    2100 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:15:49.197316    2100 fix.go:54] fixHost starting: 
	I1216 06:15:49.209064    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:15:49.275055    2100 fix.go:112] recreateIfNeeded on no-preload-686300: state=Stopped err=<nil>
	W1216 06:15:49.275055    2100 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:15:49.279060    2100 out.go:252] * Restarting existing docker container for "no-preload-686300" ...
	I1216 06:15:49.285061    2100 cli_runner.go:164] Run: docker start no-preload-686300
	I1216 06:15:50.984124    2100 cli_runner.go:217] Completed: docker start no-preload-686300: (1.6990396s)
	I1216 06:15:50.997032    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:15:51.085214    2100 kic.go:430] container "no-preload-686300" state is running.
	I1216 06:15:51.092218    2100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:15:51.171055    2100 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\config.json ...
	I1216 06:15:51.173066    2100 machine.go:94] provisionDockerMachine start ...
	I1216 06:15:51.179056    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:51.268482    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:51.269474    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:51.269474    2100 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:15:51.272603    2100 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 06:15:52.151550    2100 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.151601    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1216 06:15:52.151601    2100 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 3.29827s
	I1216 06:15:52.151601    2100 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1216 06:15:52.167356    2100 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.167649    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1216 06:15:52.167649    2100 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.3143177s
	I1216 06:15:52.167649    2100 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1216 06:15:52.181257    2100 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.181257    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1216 06:15:52.182247    2100 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3289154s
	I1216 06:15:52.182247    2100 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1216 06:15:52.196083    2100 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.196878    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1216 06:15:52.197497    2100 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.3441648s
	I1216 06:15:52.197541    2100 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1216 06:15:52.210930    2100 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.211765    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1216 06:15:52.211765    2100 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.3584325s
	I1216 06:15:52.211765    2100 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1216 06:15:52.235265    2100 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.235388    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1216 06:15:52.235388    2100 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.3820559s
	I1216 06:15:52.235388    2100 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1216 06:15:52.248968    2100 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.249896    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1216 06:15:52.249896    2100 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 3.3965632s
	I1216 06:15:52.249896    2100 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1216 06:15:52.258600    2100 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:15:52.258600    2100 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1216 06:15:52.258600    2100 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 3.4052668s
	I1216 06:15:52.258600    2100 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1216 06:15:52.258600    2100 cache.go:87] Successfully saved all images to host disk.
	I1216 06:15:54.446040    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-686300
	
	I1216 06:15:54.446581    2100 ubuntu.go:182] provisioning hostname "no-preload-686300"
	I1216 06:15:54.449644    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:54.512628    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:54.513628    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:54.513628    2100 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-686300 && echo "no-preload-686300" | sudo tee /etc/hostname
	I1216 06:15:54.720738    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-686300
	
	I1216 06:15:54.724727    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:54.784732    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:54.785726    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:54.785726    2100 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-686300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-686300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-686300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:15:54.947054    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:15:54.947054    2100 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:15:54.947054    2100 ubuntu.go:190] setting up certificates
	I1216 06:15:54.947054    2100 provision.go:84] configureAuth start
	I1216 06:15:54.952073    2100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:15:55.012040    2100 provision.go:143] copyHostCerts
	I1216 06:15:55.012040    2100 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:15:55.012040    2100 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:15:55.013040    2100 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:15:55.014046    2100 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:15:55.014046    2100 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:15:55.014046    2100 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:15:55.015058    2100 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:15:55.015058    2100 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:15:55.015058    2100 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:15:55.016069    2100 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-686300 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-686300]
	I1216 06:15:55.208500    2100 provision.go:177] copyRemoteCerts
	I1216 06:15:55.214092    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:15:55.219092    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:55.292153    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:55.413148    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 06:15:55.445511    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:15:55.475517    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:15:55.500524    2100 provision.go:87] duration metric: took 553.4625ms to configureAuth
	I1216 06:15:55.500524    2100 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:15:55.501526    2100 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:15:55.505517    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:55.571516    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:55.571516    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:55.571516    2100 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:15:55.757941    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:15:55.757941    2100 ubuntu.go:71] root file system type: overlay
	I1216 06:15:55.757941    2100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:15:55.762931    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:55.827909    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:55.828738    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:55.828906    2100 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:15:56.009883    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
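
The unit file echoed back above relies on the standard systemd override idiom its own comments describe: an empty ExecStart= first clears the inherited command, then the second ExecStart= supplies the replacement, avoiding the "more than one ExecStart=" error for Type=notify services. The same pattern in a conventional drop-in, sketched for illustration only (the path and trimmed flags are assumptions; minikube instead rewrites /lib/systemd/system/docker.service wholesale, as the next command shows):

    # Illustrative drop-in override for docker.service, not what minikube writes.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
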
	
	I1216 06:15:56.013880    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.082328    2100 main.go:143] libmachine: Using SSH client type: native
	I1216 06:15:56.082328    2100 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55112 <nil> <nil>}
	I1216 06:15:56.082328    2100 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:15:56.266049    2100 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:15:56.266049    2100 machine.go:97] duration metric: took 5.0929137s to provisionDockerMachine
	I1216 06:15:56.266049    2100 start.go:293] postStartSetup for "no-preload-686300" (driver="docker")
	I1216 06:15:56.266049    2100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:15:56.270047    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:15:56.275284    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.332073    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:56.466795    2100 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:15:56.478406    2100 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:15:56.478406    2100 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:15:56.478406    2100 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:15:56.478406    2100 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:15:56.479018    2100 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:15:56.483909    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:15:56.495492    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:15:56.519881    2100 start.go:296] duration metric: took 253.8281ms for postStartSetup
	I1216 06:15:56.524684    2100 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:15:56.527977    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.580650    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:56.705030    2100 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:15:56.713180    2100 fix.go:56] duration metric: took 7.5157617s for fixHost
	I1216 06:15:56.713180    2100 start.go:83] releasing machines lock for "no-preload-686300", held for 7.5159578s
	I1216 06:15:56.717871    2100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-686300
	I1216 06:15:56.776327    2100 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:15:56.780319    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.780319    2100 ssh_runner.go:195] Run: cat /version.json
	I1216 06:15:56.784319    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:15:56.837565    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:15:56.838791    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	W1216 06:15:56.954554    2100 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
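The exit status 127 comes from the Windows binary name curl.exe being passed through to the Linux shell inside the node container, where no such command exists. The reachability check itself could be re-run by hand, assuming plain curl is present in the node image:

    # Hypothetical manual re-run of the registry probe from inside the node.
    docker exec no-preload-686300 curl -sS -m 2 https://registry.k8s.io/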
	I1216 06:15:56.960543    2100 ssh_runner.go:195] Run: systemctl --version
	I1216 06:15:56.975699    2100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:15:56.987286    2100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:15:56.992118    2100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:15:57.010839    2100 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:15:57.010839    2100 start.go:496] detecting cgroup driver to use...
	I1216 06:15:57.010839    2100 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:15:57.010839    2100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:15:57.037138    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:15:57.055131    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1216 06:15:57.067133    2100 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:15:57.067133    2100 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:15:57.069137    2100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:15:57.073127    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:15:57.091129    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:15:57.110137    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:15:57.128137    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:15:57.146128    2100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:15:57.163143    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:15:57.180135    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:15:57.196159    2100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:15:57.212135    2100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:15:57.227784    2100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:15:57.243311    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:15:57.379464    2100 ssh_runner.go:195] Run: sudo systemctl restart containerd
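Each sed one-liner above patches one key of /etc/containerd/config.toml in place; condensed, the sequence is equivalent to:

    CFG=/etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$CFG"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
    sudo systemctl daemon-reload && sudo systemctl restart containerd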
	I1216 06:15:57.540234    2100 start.go:496] detecting cgroup driver to use...
	I1216 06:15:57.540234    2100 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:15:57.545024    2100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:15:57.569426    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:15:57.589440    2100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:15:57.641338    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:15:57.665986    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:15:57.688905    2100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:15:57.713525    2100 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:15:57.725526    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:15:57.736520    2100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:15:57.759338    2100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:15:57.866852    2100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:15:57.971868    2100 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:15:57.971868    2100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:15:58.001025    2100 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:15:58.022627    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:15:58.177131    2100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:16:00.683636    2100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5064702s)
	I1216 06:16:00.688431    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:16:00.709648    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:16:00.734513    2100 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 06:16:00.757614    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:16:00.780216    2100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:16:00.916626    2100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:16:01.079943    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:01.218481    2100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:16:01.242669    2100 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:16:01.266933    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:01.411497    2100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:16:01.516769    2100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
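The socket/service bounce follows a fixed order (unmask, enable, daemon-reload, restart) so that systemd re-reads the freshly written 10-cni.conf drop-in before cri-docker.service comes back up:

    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket
    sudo systemctl restart cri-docker.service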
	I1216 06:16:01.533957    2100 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:16:01.538498    2100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:16:01.546160    2100 start.go:564] Will wait 60s for crictl version
	I1216 06:16:01.550366    2100 ssh_runner.go:195] Run: which crictl
	I1216 06:16:01.561419    2100 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:16:01.603331    2100 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:16:01.607249    2100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:16:01.653369    2100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
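The runtime version is read with a Go template rather than parsed out of the full output; the same one-liner works against any Docker daemon:

    docker version --format '{{.Server.Version}}'   # e.g. 29.1.3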
	I1216 06:16:01.695223    2100 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 06:16:01.701025    2100 cli_runner.go:164] Run: docker exec -t no-preload-686300 dig +short host.docker.internal
	I1216 06:16:01.830212    2100 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:16:01.834723    2100 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:16:01.841898    2100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
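The hosts-file update is idempotent: any previous host.minikube.internal entry is filtered out, the fresh mapping is appended, and the file is swapped in via a temp copy. Expanded for readability:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.65.254\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts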
	I1216 06:16:01.861623    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:01.916448    2100 kubeadm.go:884] updating cluster {Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:16:01.916448    2100 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:16:01.921541    2100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:16:01.954027    2100 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
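The list above is compared against the images required for v1.35.0-beta.0; since every required image is already present, the load/transfer step is skipped. The listing command itself:

    docker images --format '{{.Repository}}:{{.Tag}}'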
	I1216 06:16:01.954027    2100 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:16:01.954027    2100 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1216 06:16:01.954601    2100 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-686300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
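The empty ExecStart= in the unit dump above is the standard systemd drop-in idiom: it clears the command inherited from the base kubelet.service so the minikube-specific invocation replaces it rather than being appended as a second ExecStart. Minimal shape (the replacement command below is a hypothetical placeholder):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    # An empty ExecStart= clears the inherited command first.
    ExecStart=
    ExecStart=/usr/local/bin/example-kubelet --example-flag
    EOF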
	I1216 06:16:01.957547    2100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:16:02.032477    2100 cni.go:84] Creating CNI manager for ""
	I1216 06:16:02.032477    2100 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:16:02.032477    2100 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:16:02.032477    2100 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-686300 NodeName:no-preload-686300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:16:02.033125    2100 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-686300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:16:02.037302    2100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:16:02.050520    2100 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:16:02.055773    2100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:16:02.069144    2100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 06:16:02.087988    2100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:16:02.107159    2100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
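kubeadm.yaml.new is staged beside the live kubeadm.yaml and diffed at 06:16:03 to decide whether reconfiguration is needed. Recent kubeadm releases (v1.31+) can also sanity-check such a file directly; a sketch, assuming the staged path:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new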
	I1216 06:16:02.131154    2100 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:16:02.138592    2100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:16:02.164109    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:02.316398    2100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:16:02.337534    2100 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300 for IP: 192.168.76.2
	I1216 06:16:02.337534    2100 certs.go:195] generating shared ca certs ...
	I1216 06:16:02.337534    2100 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:02.338569    2100 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:16:02.338569    2100 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:16:02.338569    2100 certs.go:257] generating profile certs ...
	I1216 06:16:02.339339    2100 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\client.key
	I1216 06:16:02.339930    2100 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key.de5dcef0
	I1216 06:16:02.340107    2100 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key
	I1216 06:16:02.340956    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:16:02.341198    2100 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:16:02.341261    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:16:02.341499    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:16:02.341684    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:16:02.341684    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:16:02.341684    2100 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:16:02.343095    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:16:02.368546    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:16:02.399022    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:16:02.424980    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:16:02.453485    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:16:02.487356    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:16:02.515064    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:16:02.540749    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-686300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:16:02.565144    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:16:02.590623    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:16:02.617426    2100 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:16:02.640948    2100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:16:02.664234    2100 ssh_runner.go:195] Run: openssl version
	I1216 06:16:02.677958    2100 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.693840    2100 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:16:02.709650    2100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.716131    2100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.720662    2100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:16:02.770093    2100 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
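Each CA is made visible to OpenSSL by symlinking it under its subject hash in /etc/ssl/certs; the 3ec20f2e.0 checked above is exactly that hash for 117042.pem. The equivalent by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem)
    sudo ln -fs /usr/share/ca-certificates/117042.pem "/etc/ssl/certs/${HASH}.0"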
	I1216 06:16:02.786257    2100 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.804343    2100 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:16:02.820485    2100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.827160    2100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.831640    2100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:16:02.879678    2100 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:16:02.895769    2100 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.914074    2100 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:16:02.931602    2100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.941222    2100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.944922    2100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:16:02.993477    2100 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:16:03.010028    2100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:16:03.022808    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:16:03.076221    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:16:03.132138    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:16:03.193108    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:16:03.250120    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:16:03.324424    2100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
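-checkend 86400 makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24h), so each line doubles as a pass/fail probe:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "cert valid for at least another 24h"
    else
        echo "cert expires within 24h (or failed to parse)"
    fi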
	I1216 06:16:03.378991    2100 kubeadm.go:401] StartCluster: {Name:no-preload-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-686300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:16:03.383442    2100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:16:03.426627    2100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:16:03.448420    2100 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:16:03.448441    2100 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:16:03.454343    2100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:16:03.475733    2100 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:16:03.479687    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.530322    2100 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-686300" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:16:03.531312    2100 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-686300" cluster setting kubeconfig missing "no-preload-686300" context setting]
	I1216 06:16:03.531312    2100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:03.554910    2100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:16:03.568450    2100 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 06:16:03.568450    2100 kubeadm.go:602] duration metric: took 120.007ms to restartPrimaryControlPlane
	I1216 06:16:03.568450    2100 kubeadm.go:403] duration metric: took 189.4567ms to StartCluster
	I1216 06:16:03.568450    2100 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:03.569459    2100 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:16:03.570898    2100 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:16:03.571666    2100 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:16:03.571666    2100 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:16:03.571666    2100 addons.go:70] Setting storage-provisioner=true in profile "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:70] Setting dashboard=true in profile "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:70] Setting default-storageclass=true in profile "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:239] Setting addon dashboard=true in "no-preload-686300"
	I1216 06:16:03.571666    2100 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-686300"
	I1216 06:16:03.571666    2100 addons.go:239] Setting addon storage-provisioner=true in "no-preload-686300"
	W1216 06:16:03.571666    2100 addons.go:248] addon dashboard should already be in state true
	I1216 06:16:03.571666    2100 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:16:03.571666    2100 host.go:66] Checking if "no-preload-686300" exists ...
	I1216 06:16:03.571666    2100 host.go:66] Checking if "no-preload-686300" exists ...
	I1216 06:16:03.574673    2100 out.go:179] * Verifying Kubernetes components...
	I1216 06:16:03.582308    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.583195    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.583195    2100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:16:03.585220    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.654887    2100 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:16:03.654887    2100 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 06:16:03.657896    2100 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:16:03.655884    2100 addons.go:239] Setting addon default-storageclass=true in "no-preload-686300"
	I1216 06:16:03.657896    2100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:16:03.657896    2100 host.go:66] Checking if "no-preload-686300" exists ...
	I1216 06:16:03.661917    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.662889    2100 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 06:16:03.665900    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 06:16:03.665900    2100 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 06:16:03.667896    2100 cli_runner.go:164] Run: docker container inspect no-preload-686300 --format={{.State.Status}}
	I1216 06:16:03.671893    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.725618    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:16:03.726615    2100 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:16:03.726615    2100 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:16:03.728616    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:16:03.730622    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.778616    2100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:16:03.782622    2100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55112 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-686300\id_rsa Username:docker}
	I1216 06:16:03.806619    2100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-686300
	I1216 06:16:03.866065    2100 node_ready.go:35] waiting up to 6m0s for node "no-preload-686300" to be "Ready" ...
	I1216 06:16:03.887062    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:16:03.889063    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 06:16:03.889063    2100 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 06:16:03.915062    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 06:16:03.915062    2100 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 06:16:03.974753    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:16:03.984754    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 06:16:03.984754    2100 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 06:16:04.002751    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 06:16:04.002751    2100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 06:16:04.077037    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 06:16:04.077037    2100 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1216 06:16:04.097029    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.098040    2100 retry.go:31] will retry after 327.291867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.102038    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 06:16:04.102038    2100 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 06:16:04.162650    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 06:16:04.162730    2100 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1216 06:16:04.172452    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.172452    2100 retry.go:31] will retry after 162.955986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.190835    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 06:16:04.190835    2100 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 06:16:04.212428    2100 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:16:04.212428    2100 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 06:16:04.242274    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:04.333523    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.333523    2100 retry.go:31] will retry after 306.565091ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.339511    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:04.426748    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.426748    2100 retry.go:31] will retry after 243.308048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.429746    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:04.513792    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.513854    2100 retry.go:31] will retry after 338.54175ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.645290    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:16:04.674409    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:04.731418    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.731418    2100 retry.go:31] will retry after 504.836716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
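	Every failure above has the same root cause: kubectl validates manifests client-side by downloading the OpenAPI schema from the apiserver, and the apiserver on localhost:8443 is not accepting connections yet, so each apply fails before any manifest reaches the cluster. minikube reacts by retrying each addon apply with a short, jittered backoff (the retry.go lines). A minimal sketch of that retry pattern, using a hypothetical retryWithBackoff helper rather than minikube's actual retry.go API:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a growing, jittered delay, mirroring
// the "will retry after ..." lines in the log above. The helper and its
// parameters are illustrative, not minikube's actual retry.go API.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the delay each attempt and add jitter so the three
		// addon appliers (dashboard, storageclass, storage-provisioner)
		// do not retry in lockstep.
		delay := base * time.Duration(1<<i)
		delay += time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retryWithBackoff(4, 300*time.Millisecond, func() error {
		return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
	})
	fmt.Println("gave up:", err)
}
```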
	W1216 06:16:04.761411    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.761411    2100 retry.go:31] will retry after 362.968297ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1216 06:16:04.857829    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:04.963423    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:04.963423    2100 retry.go:31] will retry after 692.98574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
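	What these retries are implicitly waiting on is apiserver readiness: validation cannot download /openapi/v2 until something answers on localhost:8443. A sketch of polling the apiserver's /readyz endpoint (a standard kube-apiserver health endpoint) until it responds; the URL, timeout, and poll interval here are assumptions for illustration:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver's /readyz endpoint until it
// answers, which is what the repeated "connection refused" failures
// above are effectively waiting for. InsecureSkipVerify is used only
// because this is a localhost probe in a sketch; a real client would
// trust the cluster CA instead.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not ready within %v", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://localhost:8443/readyz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```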
	I1216 06:16:05.128838    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:05.236152    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.236152    2100 retry.go:31] will retry after 1.059819013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1216 06:16:05.242380    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:05.336959    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.336959    2100 retry.go:31] will retry after 651.301512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 06:16:05.661242    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:05.772466    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:05.772466    2100 retry.go:31] will retry after 1.028057258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1216 06:16:05.992090    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:06.105856    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.105856    2100 retry.go:31] will retry after 1.077072034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 06:16:06.301919    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:06.434927    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.434927    2100 retry.go:31] will retry after 1.819517425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1216 06:16:06.807395    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:06.909747    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:06.909747    2100 retry.go:31] will retry after 1.116729418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1216 06:16:07.188680    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:07.304775    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:07.304775    2100 retry.go:31] will retry after 990.350055ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 06:16:08.031059    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:08.142079    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.142133    2100 retry.go:31] will retry after 2.44300328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1216 06:16:08.261149    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:16:08.302604    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:08.363926    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.363926    2100 retry.go:31] will retry after 1.04966539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1216 06:16:08.409917    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:08.409917    2100 retry.go:31] will retry after 1.423403129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 06:16:09.418423    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:09.503734    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:09.503734    2100 retry.go:31] will retry after 3.436079802s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
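	The stderr hint suggests --validate=false as an escape hatch. That flag skips the OpenAPI download, but it would not rescue these runs: the apply itself still has to reach the same unreachable apiserver. A sketch of invoking kubectl that way via os/exec, with the manifest path taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyWithoutValidation shells out to kubectl with --validate=false,
// the workaround the errors above suggest. This skips client-side
// schema validation entirely, but the server-side apply still needs a
// reachable apiserver, so it would not have helped here.
func applyWithoutValidation(manifest string) error {
	cmd := exec.Command("kubectl", "apply", "--validate=false", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := applyWithoutValidation("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println(err)
	}
}
```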
	I1216 06:16:09.838732    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:09.928361    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:09.928361    2100 retry.go:31] will retry after 2.530734224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1216 06:16:10.590016    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:10.672848    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:10.672848    2100 retry.go:31] will retry after 2.162609718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.464950    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:12.556706    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.557281    2100 retry.go:31] will retry after 3.536450628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.840427    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:12.923546    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.923546    2100 retry.go:31] will retry after 3.393774227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:12.944564    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:13.023675    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:13.023675    2100 retry.go:31] will retry after 2.67208837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:13.900769    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:16:15.701144    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:15.783515    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:15.783515    2100 retry.go:31] will retry after 8.494358942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:16.098616    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:16.176246    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:16.176246    2100 retry.go:31] will retry after 4.833211983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:16.321854    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:16.401891    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:16.401985    2100 retry.go:31] will retry after 5.499383249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:21.013930    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:21.105795    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:21.105795    2100 retry.go:31] will retry after 12.679359091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:21.907394    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:22.057884    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:22.057884    2100 retry.go:31] will retry after 6.054265484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:23.931699    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:16:24.282931    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:24.372528    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:24.372528    2100 retry.go:31] will retry after 9.391926266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:28.115821    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:28.207393    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:28.207479    2100 retry.go:31] will retry after 10.262817225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:33.770047    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:16:33.793665    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:33.865051    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:33.865051    2100 retry.go:31] will retry after 16.595535776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:33.885355    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:33.885355    2100 retry.go:31] will retry after 11.967139659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:33.964837    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:16:38.476488    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:16:38.555984    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:38.555984    2100 retry.go:31] will retry after 27.908854902s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:43.997802    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:16:45.859452    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:16:45.980708    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:45.980708    2100 retry.go:31] will retry after 28.781525498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:50.466153    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:16:50.565884    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:16:50.565884    2100 retry.go:31] will retry after 25.197990186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:16:54.034119    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:17:04.069203    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:06.470180    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:17:06.550561    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:06.550677    2100 retry.go:31] will retry after 18.192310247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:17:14.103030    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:14.767395    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:17:14.860797    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:14.860797    2100 retry.go:31] will retry after 32.78252651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:15.769955    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:17:15.874160    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:17:15.874160    2100 retry.go:31] will retry after 22.812506175s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:17:24.136492    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:24.748010    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:17:24.827592    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:17:24.828591    2100 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1216 06:17:34.169950    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:38.691674    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:17:38.773335    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:17:38.773541    2100 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1216 06:17:44.206398    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:17:47.651079    2100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:17:47.775346    2100 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:17:47.775346    2100 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:17:47.779358    2100 out.go:179] * Enabled addons: 
	I1216 06:17:47.783373    2100 addons.go:530] duration metric: took 1m44.210288s for enable addons: enabled=[]
	W1216 06:17:54.240075    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:18:04.278421    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:18:14.311885    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:18:24.346274    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:18:34.382464    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:18:44.415240    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:18:54.451742    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:19:04.484363    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:19:14.525213    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:19:24.563047    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:19:34.591801    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:19:44.628333    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:19:54.662478    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:20:04.692598    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:20:14.728342    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:20:24.764402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:20:34.799760    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:20:44.837911    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:20:54.874957    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:21:04.907245    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:21:14.943701    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:21:24.978331    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:21:35.013402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:21:45.051007    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:21:55.087921    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	W1216 06:22:03.871305    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 06:22:03.871305    2100 node_ready.go:38] duration metric: took 6m0.0002926s for node "no-preload-686300" to be "Ready" ...
	I1216 06:22:03.874339    2100 out.go:203] 
	W1216 06:22:03.877050    2100 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:22:03.877050    2100 out.go:285] * 
	* 
	W1216 06:22:03.879403    2100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:22:03.881203    2100 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-686300 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-686300
helpers_test.go:244: (dbg) docker inspect no-preload-686300:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc",
	        "Created": "2025-12-16T06:04:57.603416998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 408764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:15:50.357035984Z",
	            "FinishedAt": "2025-12-16T06:15:46.555763422Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hosts",
	        "LogPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc-json.log",
	        "Name": "/no-preload-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-686300",
	                "Source": "/var/lib/docker/volumes/no-preload-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-686300",
	                "name.minikube.sigs.k8s.io": "no-preload-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58679b470f3820ec221a43ce0cb2eeb96c16084feb347cd3733ff5e676214bcf",
	            "SandboxKey": "/var/run/docker/netns/58679b470f38",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55112"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55113"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c88c2c552749da4ec31002e488ccc5e8184c9ce7ba360033e35a2aa2c5aead9",
	                    "EndpointID": "43959eb122225f782ad58d938dd1f7bfc24c45960ef7507609ea418938e5d63c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-686300",
	                        "f98924d46fbc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 2 (626.4048ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25: (1.2066917s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl status kubelet --all --full --no-pager                                        │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl cat kubelet --no-pager                                                        │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo journalctl -xeu kubelet --all --full --no-pager                                         │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cat /etc/kubernetes/kubelet.conf                                                        │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cat /var/lib/kubelet/config.yaml                                                        │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl status docker --all --full --no-pager                                         │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl cat docker --no-pager                                                         │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cat /etc/docker/daemon.json                                                             │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo docker system info                                                                      │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl status cri-docker --all --full --no-pager                                     │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl cat cri-docker --no-pager                                                     │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cat /usr/lib/systemd/system/cri-docker.service                                          │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cri-dockerd --version                                                                   │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl status containerd --all --full --no-pager                                     │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl cat containerd --no-pager                                                     │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cat /lib/systemd/system/containerd.service                                              │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo cat /etc/containerd/config.toml                                                         │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo containerd config dump                                                                  │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl status crio --all --full --no-pager                                           │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-030800 sudo systemctl cat crio --no-pager                                                           │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                 │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ ssh     │ -p enable-default-cni-030800 sudo crio config                                                                             │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ delete  │ -p enable-default-cni-030800                                                                                              │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │ 16 Dec 25 06:21 UTC │
	│ start   │ -p kubenet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker │ kubenet-030800            │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:21:31
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:21:31.068463    4424 out.go:360] Setting OutFile to fd 1300 ...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.112163    4424 out.go:374] Setting ErrFile to fd 1224...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.126168    4424 out.go:368] Setting JSON to false
	I1216 06:21:31.128157    4424 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7112,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:21:31.129155    4424 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:21:31.133155    4424 out.go:179] * [kubenet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:21:31.136368    4424 notify.go:221] Checking for updates...
	I1216 06:21:31.137751    4424 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:31.140914    4424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:21:31.143313    4424 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:21:31.145626    4424 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:21:31.147629    4424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:21:31.150478    4424 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151727    4424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:21:31.272417    4424 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:21:31.275875    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.534539    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.516919297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
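
The "docker system info --format {{json .}}" call logged above returns a single JSON document that is decoded before the driver is validated. The following is a minimal sketch of that decode step, not minikube's actual code: the struct keeps only fields visible in the dump (NCPU, MemTotal, OperatingSystem, ServerVersion), and all names are chosen for illustration.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only the fields this report's dump actually shows;
// the real payload carries many more.
type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	OperatingSystem string `json:"OperatingSystem"`
	ServerVersion   string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("server %s on %s: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}
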
	I1216 06:21:31.537553    4424 out.go:179] * Using the docker driver based on user configuration
	I1216 06:21:31.541211    4424 start.go:309] selected driver: docker
	I1216 06:21:31.541254    4424 start.go:927] validating driver "docker" against <nil>
	I1216 06:21:31.541286    4424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:21:31.597589    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.842240    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.823958826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.842240    4424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:21:31.843240    4424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:31.846236    4424 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:21:31.848222    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:21:31.848222    4424 start.go:353] cluster config:
	{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:21:31.851222    4424 out.go:179] * Starting "kubenet-030800" primary control-plane node in "kubenet-030800" cluster
	I1216 06:21:31.860233    4424 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:21:31.863229    4424 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:21:31.866228    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:31.866228    4424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:21:31.866228    4424 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:21:31.866228    4424 cache.go:65] Caching tarball of preloaded images
	I1216 06:21:31.866228    4424 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:21:31.866228    4424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:21:31.866228    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:31.866228    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json: {Name:mkd9bbe5249f898d86f7b7f3961735d2ed71d636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:31.935458    4424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:21:31.935458    4424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:21:31.935988    4424 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:21:31.936042    4424 start.go:360] acquireMachinesLock for kubenet-030800: {Name:mka6ae821c9ad8ee62e1a8eef0f2acffe81ebe64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:21:31.936202    4424 start.go:364] duration metric: took 160.2µs to acquireMachinesLock for "kubenet-030800"
	I1216 06:21:31.936352    4424 start.go:93] Provisioning new machine with config: &{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:31.936477    4424 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:21:30.055522    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:30.078529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:30.109519    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.109519    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:30.112511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:30.144765    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.144765    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:30.149313    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:30.181852    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.181852    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:30.184838    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:30.217464    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.217504    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:30.221332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:30.249301    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.249301    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:30.252441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:30.278998    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.278998    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:30.281997    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:30.312414    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.312414    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:30.318242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:30.357304    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.357361    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:30.357422    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:30.357422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:30.746929    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:30.746929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:30.795665    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:30.795746    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:30.878691    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:30.878691    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:30.913679    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:30.913679    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:31.010602    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
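
The stanza above repeats once per expected control-plane workload: list containers whose names match "k8s_<component>" via the Docker CLI, and warn when none exist. A minimal sketch of that probe loop follows; the function shape and variable names are illustrative, not minikube's actual source.

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Mirrors the logged command: docker ps -a --filter=name=k8s_<c> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		ids := strings.Fields(string(out))
		log.Printf("%d containers: %v", len(ids), ids)
		if err != nil || len(ids) == 0 {
			log.Printf("No container was found matching %q", c)
		}
	}
}
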
	I1216 06:21:33.516463    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:33.569431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:33.607164    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.607164    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:33.610177    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:33.645795    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.645795    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:33.649795    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:33.678786    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.678786    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:33.681789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:33.712090    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.712090    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:33.716091    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:33.749207    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.749207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:33.753072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:33.787281    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.787317    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:33.790849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:33.822451    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.822451    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:33.828957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:33.859827    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.859881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:33.859881    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:33.859929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:33.922584    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:33.922584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:34.010640    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:34.010640    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:34.010640    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:34.047937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:34.047937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:34.108035    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:34.108113    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:31.939854    4424 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:21:31.939854    4424 start.go:159] libmachine.API.Create for "kubenet-030800" (driver="docker")
	I1216 06:21:31.939854    4424 client.go:173] LocalClient.Create starting
	I1216 06:21:31.940866    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.946190    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:21:32.002258    4424 cli_runner.go:211] docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:21:32.006251    4424 network_create.go:284] running [docker network inspect kubenet-030800] to gather additional debugging logs...
	I1216 06:21:32.006251    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800
	W1216 06:21:32.057274    4424 cli_runner.go:211] docker network inspect kubenet-030800 returned with exit code 1
	I1216 06:21:32.057274    4424 network_create.go:287] error running [docker network inspect kubenet-030800]: docker network inspect kubenet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-030800 not found
	I1216 06:21:32.057274    4424 network_create.go:289] output of [docker network inspect kubenet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-030800 not found
	
	** /stderr **
	I1216 06:21:32.061267    4424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:21:32.137401    4424 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.168856    4424 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.184860    4424 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.200856    4424 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.216426    4424 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.232146    4424 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d96b0}
	I1216 06:21:32.232146    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 06:21:32.235443    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	W1216 06:21:32.288644    4424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800 returned with exit code 1
	W1216 06:21:32.288644    4424 network_create.go:149] failed to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:21:32.288644    4424 network_create.go:116] failed to create docker network kubenet-030800 192.168.94.0/24, will retry: subnet is taken
	I1216 06:21:32.308048    4424 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.321168    4424 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015f57d0}
	I1216 06:21:32.321265    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:21:32.325637    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	I1216 06:21:32.469323    4424 network_create.go:108] docker network kubenet-030800 192.168.103.0/24 created
	I1216 06:21:32.469323    4424 kic.go:121] calculated static IP "192.168.103.2" for the "kubenet-030800" container
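
The create/retry sequence above shows the pattern at work: candidate /24 subnets are drawn from the 192.168 private range starting at .49.0 with the third octet stepping by 9 (49, 58, 67, 76, 85, 94, 103, ...), already-reserved subnets are skipped, and a "Pool overlaps with other one on this address space" error from the daemon sends the loop on to the next candidate. The following is a minimal sketch under those assumptions, not minikube's actual implementation; the upper bound and error matching are chosen for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetworkWithFreeSubnet walks 192.168.x.0/24 candidates (x stepping by 9,
// as seen in the log) until "docker network create" succeeds.
func createNetworkWithFreeSubnet(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet taken by another network; try the next candidate
		}
		return "", fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found for %s", name)
}

func main() {
	subnet, err := createNetworkWithFreeSubnet("kubenet-030800")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created network on", subnet)
}
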
	I1216 06:21:32.483125    4424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:21:32.541557    4424 cli_runner.go:164] Run: docker volume create kubenet-030800 --label name.minikube.sigs.k8s.io=kubenet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:21:32.608360    4424 oci.go:103] Successfully created a docker volume kubenet-030800
	I1216 06:21:32.611360    4424 cli_runner.go:164] Run: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:21:34.117036    4424 cli_runner.go:217] Completed: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.5056549s)
	I1216 06:21:34.117036    4424 oci.go:107] Successfully prepared a docker volume kubenet-030800
	I1216 06:21:34.117036    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:34.117036    4424 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:21:34.121793    4424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
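
The extraction step above primes the named volume by running a throwaway container from the kicbase image with tar as its entrypoint, mounting the preload tarball read-only and the volume as the target directory. A condensed sketch of the same invocation, with the host path and image digest elided for brevity:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same shape as the logged command; the tarball path and image tag are shortened here.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", `C:\...\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro`,
		"-v", "kubenet-030800:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v: %s", err, out)
	}
}
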
	I1216 06:21:37.760556    7800 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:21:37.760556    7800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:21:37.761189    7800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:21:37.761753    7800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:21:37.761881    7800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:21:37.761881    7800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:21:37.764442    7800 out.go:252]   - Generating certificates and keys ...
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:21:37.765188    7800 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:21:37.765955    7800 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:21:37.766018    7800 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:21:37.766124    7800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:21:37.766165    7800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:21:37.766271    7800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:21:37.766333    7800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:21:37.766397    7800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:21:37.766458    7800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:21:37.770151    7800 out.go:252]   - Booting up control plane ...
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:21:37.770817    7800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:21:37.770952    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:21:37.771091    7800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:21:37.771167    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:21:37.771225    7800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:21:37.771366    7800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004327208s
	I1216 06:21:37.771902    7800 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:21:37.772247    7800 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 06:21:37.772484    7800 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:21:37.772735    7800 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:21:37.773067    7800 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.101943404s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.591910767s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002177662s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:21:37.773799    7800 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:21:37.773799    7800 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:21:37.774455    7800 kubeadm.go:319] [mark-control-plane] Marking the node bridge-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:21:37.774523    7800 kubeadm.go:319] [bootstrap-token] Using token: lrkd8c.ky3vlqagn7chac73
	I1216 06:21:37.777890    7800 out.go:252]   - Configuring RBAC rules ...
	I1216 06:21:37.777890    7800 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:21:37.779666    7800 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:21:37.780278    7800 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:21:37.780278    7800 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:21:37.781243    7800 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--control-plane 
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
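
The --discovery-token-ca-cert-hash value kubeadm prints above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch of how a joining node could recompute it for verification; the certificate path is kubeadm's default, assumed here rather than taken from this log.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // kubeadm's default CA location
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm pins the Subject Public Key Info, not the whole certificate.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
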
	I1216 06:21:37.782257    7800 cni.go:84] Creating CNI manager for "bridge"
	I1216 06:21:37.785969    7800 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1216 06:21:35.013402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:37.791788    7800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 06:21:37.806804    7800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 06:21:37.825807    7800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-030800 minikube.k8s.io/updated_at=2025_12_16T06_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=bridge-030800 minikube.k8s.io/primary=true
	I1216 06:21:37.839814    7800 ops.go:34] apiserver oom_adj: -16
	I1216 06:21:38.032186    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:38.534048    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.035804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.534294    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:36.686452    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:36.704466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:36.733394    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.733394    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:36.737348    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:36.773510    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.773510    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:36.776509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:36.805498    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.805498    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:36.809501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:36.845499    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.845499    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:36.849511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:36.879646    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.879646    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:36.884108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:36.915408    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.915408    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:36.920279    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:36.952754    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.952754    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:36.955734    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:36.987884    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.987884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:36.987884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:36.987884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:37.053646    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:37.053646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:37.093881    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:37.093881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:37.179527    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
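Editor's note: every `kubectl describe nodes` attempt in this gather loop fails with "connection refused" on [::1]:8443, which is consistent with the probes above finding no `k8s_kube-apiserver` container at all — nothing is bound to the apiserver port inside the node. A minimal sketch (not minikube source) that reproduces the distinction: a plain TCP dial separates "refused" (no listener, as here) from a timeout (network/firewall problem).

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address and timeout are assumptions for illustration.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}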
	I1216 06:21:37.179584    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:37.179628    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:37.206261    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:37.206297    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
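Editor's note: each gather cycle starts by probing for every expected control-plane container by its `k8s_` name prefix; all eight probes return "0 containers", so only kubelet, dmesg, Docker, and container-status logs can actually be collected. A minimal sketch of that probe loop, mirroring the docker invocations in the log (this is not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Same filter/format flags as the logged "docker ps" runs.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "probe failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", c, len(ids)) // 0 => component never started
	}
}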
	I1216 06:21:39.778747    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:39.802598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:39.832179    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.832179    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:39.836704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:39.869121    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.869121    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:39.873774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:39.909668    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.909668    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:39.914691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:39.947830    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.947830    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:39.951594    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:40.034177    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:40.535099    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.034558    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.535126    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.034691    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.533593    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.035143    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.831113    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:44.554108    7800 kubeadm.go:1114] duration metric: took 6.7282073s to wait for elevateKubeSystemPrivileges
	I1216 06:21:44.554108    7800 kubeadm.go:403] duration metric: took 23.3439157s to StartCluster
	I1216 06:21:44.554108    7800 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.554108    7800 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:44.555899    7800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.557179    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:21:44.557179    7800 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:44.557179    7800 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:21:44.557179    7800 addons.go:70] Setting storage-provisioner=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:239] Setting addon storage-provisioner=true in "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:70] Setting default-storageclass=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 host.go:66] Checking if "bridge-030800" exists ...
	I1216 06:21:44.557179    7800 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-030800"
	I1216 06:21:44.557179    7800 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.910438    7800 out.go:179] * Verifying Kubernetes components...
	I1216 06:21:39.982557    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.982557    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:39.986642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:40.018169    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.018169    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:40.021165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:40.051243    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.051243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:40.057090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:40.084414    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.084414    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:40.084414    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:40.084414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:40.144414    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:40.144414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:40.179632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:40.179632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:40.269800    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:40.270323    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:40.270323    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:40.298399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:40.298399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:42.860131    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:42.883733    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:42.914204    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.914204    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:42.918228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:42.950380    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.950459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:42.954335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:42.985562    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.985562    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:42.988741    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:43.016773    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.016773    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:43.020957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:43.059042    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.059042    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:43.062546    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:43.091529    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.091529    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:43.095547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:43.122188    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.122188    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:43.126264    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:43.154603    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.154667    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:43.154687    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:43.154709    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:43.217437    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:43.217437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:43.256550    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:43.256550    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:43.334672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:43.334672    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:43.334672    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:43.363324    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:43.363324    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:44.625758    7800 addons.go:239] Setting addon default-storageclass=true in "bridge-030800"
	I1216 06:21:44.961765    7800 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:21:44.962159    7800 host.go:66] Checking if "bridge-030800" exists ...
	W1216 06:21:45.051007    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:45.413866    7800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:45.416342    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:45.428762    7800 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.428762    7800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:21:45.433231    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.481472    7800 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:45.481472    7800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:21:45.485567    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.487870    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.534738    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:21:45.540734    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.651776    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.743561    7800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:21:45.947134    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:48.661269    7800 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.1264885s)
	I1216 06:21:48.661269    7800 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
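Editor's note: the sed pipeline that completed at 06:21:48.661 splices two edits into the CoreDNS Corefile before replacing the ConfigMap — a `log` directive ahead of `errors`, and a `hosts` block ahead of the `forward` directive. Reconstructed from the sed expressions above, the patched Corefile should contain a stanza of roughly this shape:

        log
        errors
        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

That is what start.go:977 means by the host record being "injected": pods resolving host.minikube.internal get the host gateway address directly, and everything else falls through to the upstream resolver.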
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.2776091s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.1858261s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.9822555s)
	I1216 06:21:48.933443    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:48.974829    7800 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:21:48.977844    7800 addons.go:530] duration metric: took 4.4206041s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:21:48.994296    7800 node_ready.go:35] waiting up to 15m0s for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 node_ready.go:49] node "bridge-030800" is "Ready"
	I1216 06:21:49.024312    7800 node_ready.go:38] duration metric: took 30.0163ms for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:21:49.030307    7800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.051593    7800 api_server.go:72] duration metric: took 4.4943521s to wait for apiserver process to appear ...
	I1216 06:21:49.051593    7800 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:21:49.051593    7800 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56268/healthz ...
	I1216 06:21:49.061499    7800 api_server.go:279] https://127.0.0.1:56268/healthz returned 200:
	ok
	I1216 06:21:49.063514    7800 api_server.go:141] control plane version: v1.34.2
	I1216 06:21:49.063514    7800 api_server.go:131] duration metric: took 11.9204ms to wait for apiserver health ...
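Editor's note: this is the healthy counterpart to the failing cluster above — the bridge-030800 apiserver answers /healthz on its forwarded port within 12ms while PID 8452's cluster keeps refusing connections. A sketch of the poll, assuming the forwarded port from this run (56268) and skipping TLS verification because the apiserver presents a cluster-internal certificate for 127.0.0.1:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://127.0.0.1:56268/healthz")
	if err != nil {
		fmt.Println("healthz not ready:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver returns 200 "ok"
}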
	I1216 06:21:49.064510    7800 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:21:49.088115    7800 system_pods.go:59] 8 kube-system pods found
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.088115    7800 system_pods.go:74] duration metric: took 23.6038ms to wait for pod list to return data ...
	I1216 06:21:49.088115    7800 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:21:49.094110    7800 default_sa.go:45] found service account: "default"
	I1216 06:21:49.094110    7800 default_sa.go:55] duration metric: took 5.9949ms for default service account to be created ...
	I1216 06:21:49.094110    7800 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:21:49.100097    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.100097    7800 retry.go:31] will retry after 202.33386ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.170358    7800 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-030800" context rescaled to 1 replicas
	I1216 06:21:49.310950    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.310950    7800 retry.go:31] will retry after 302.122926ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.630338    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630577    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.630663    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.630695    7800 retry.go:31] will retry after 447.973015ms: missing components: kube-dns, kube-proxy
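Editor's note: the logged delays (202.33ms, 302.12ms, 447.97ms, then 426.64ms, 479.14ms, 758.16ms) grow but are not strict multiples, i.e. this is a jittered backoff while waiting for kube-dns and kube-proxy to come up. An illustrative loop under that assumption — growth factor and jitter here are made up to mimic the logged sequence, not taken from minikube's retry package:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func waitForComponents(missing func() []string, timeout time.Duration) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		m := missing()
		if len(m) == 0 {
			return nil
		}
		fmt.Printf("will retry after %v: missing components: %v\n", delay, m)
		time.Sleep(delay)
		delay = time.Duration(float64(delay) * (1.3 + 0.4*rand.Float64())) // grow with jitter
	}
	return fmt.Errorf("timed out, still missing: %v", missing())
}

func main() {
	attempts := 0
	err := waitForComponents(func() []string {
		attempts++
		if attempts < 4 {
			return []string{"kube-dns", "kube-proxy"}
		}
		return nil
	}, 15*time.Minute)
	fmt.Println("done:", err)
}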
	I1216 06:21:45.929428    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:45.950755    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:45.982605    8452 logs.go:282] 0 containers: []
	W1216 06:21:45.982605    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:45.987711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:46.020649    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.020649    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:46.024931    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:46.058836    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.058836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:46.066651    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:46.094860    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.094860    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:46.098689    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:46.127246    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.127246    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:46.130937    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:46.159519    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.159519    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:46.163609    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:46.195483    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.195483    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:46.199178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:46.229349    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.229349    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:46.229349    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:46.229349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:46.292392    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:46.292392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:46.330903    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:46.330903    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:46.427283    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:46.427283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:46.427283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:46.458647    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:46.458647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:49.005293    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.027308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:49.061499    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.061499    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:49.064510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:49.094110    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.094110    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:49.097105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:49.126749    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.126749    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:49.132748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:49.169344    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.169344    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:49.174332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:49.209103    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.209103    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:49.214581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:49.248644    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.248644    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:49.251643    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:49.281632    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.281632    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:49.285645    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:49.316955    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.316955    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:49.316955    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:49.317942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:49.391656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:49.391738    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:49.432724    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:49.432724    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:49.523989    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:49.523989    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:49.523989    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:49.552004    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:49.552004    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:48.467044    4424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.3450525s)
	I1216 06:21:48.467044    4424 kic.go:203] duration metric: took 14.349809s to extract preloaded images to volume ...
	I1216 06:21:48.470844    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:48.730876    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:48.710057733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:48.733867    4424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:21:48.983392    4424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-030800 --name kubenet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-030800 --network kubenet-030800 --ip 192.168.103.2 --volume kubenet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:21:49.764686    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Running}}
	I1216 06:21:49.828590    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:49.890595    4424 cli_runner.go:164] Run: docker exec kubenet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:21:50.004225    4424 oci.go:144] the created container "kubenet-030800" has a running status.
	I1216 06:21:50.005228    4424 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.057161    4424 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:21:50.141101    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:50.207656    4424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:21:50.207656    4424 kic_runner.go:114] Args: [docker exec --privileged kubenet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:21:50.326664    4424 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
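Editor's note: the `docker run` above publishes each node port (22, 2376, 5000, 8443, 32443) to a random port on 127.0.0.1; minikube resolves the assignments afterwards with the `docker container inspect -f` template visible elsewhere in this log. A sketch of that lookup, reusing the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port Docker mapped to the given container port.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("kubenet-030800", "22")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port)
}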
	I1216 06:21:50.087090    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.087090    7800 retry.go:31] will retry after 426.637768ms: missing components: kube-dns, kube-proxy
	I1216 06:21:50.538640    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.538640    7800 retry.go:31] will retry after 479.139187ms: missing components: kube-dns
	I1216 06:21:51.025065    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.025065    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:51.025193    7800 retry.go:31] will retry after 758.159415ms: missing components: kube-dns
	I1216 06:21:51.791088    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Running
	I1216 06:21:51.791088    7800 system_pods.go:126] duration metric: took 2.6969413s to wait for k8s-apps to be running ...
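
The system_pods/retry.go entries above show the poll-until-ready pattern used while waiting for k8s-apps: list the kube-system pods, and if a required component (here kube-dns) is still missing, sleep a short randomized backoff and list again until a deadline. A minimal Go sketch of that loop follows; listRunning() is a hypothetical stand-in for the real client-go pod listing, not minikube's actual API.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// listRunning is a hypothetical stand-in for the client-go call that
// returns the kube-system pods currently in the Running phase.
func listRunning() []string {
	return []string{"etcd", "kube-apiserver", "kube-proxy"}
}

// waitForComponents polls until every required component is running,
// sleeping a randomized backoff between attempts, as retry.go logs above.
func waitForComponents(required []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		running := map[string]bool{}
		for _, name := range listRunning() {
			running[name] = true
		}
		var missing []string
		for _, want := range required {
			if !running[want] {
				missing = append(missing, want)
			}
		}
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, still missing components: %v", missing)
		}
		backoff := 400*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond
		fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
		time.Sleep(backoff)
	}
}

func main() {
	_ = waitForComponents([]string{"kube-dns"}, 2*time.Second)
}
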
	I1216 06:21:51.791088    7800 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:21:51.798336    7800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:21:51.818183    7800 system_svc.go:56] duration metric: took 27.0943ms WaitForService to wait for kubelet
	I1216 06:21:51.818183    7800 kubeadm.go:587] duration metric: took 7.2609035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:51.818183    7800 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:21:51.825244    7800 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:21:51.825244    7800 node_conditions.go:123] node cpu capacity is 16
	I1216 06:21:51.825244    7800 node_conditions.go:105] duration metric: took 7.0607ms to run NodePressure ...
	I1216 06:21:51.825244    7800 start.go:242] waiting for startup goroutines ...
	I1216 06:21:51.825244    7800 start.go:247] waiting for cluster config update ...
	I1216 06:21:51.825244    7800 start.go:256] writing updated cluster config ...
	I1216 06:21:51.833706    7800 ssh_runner.go:195] Run: rm -f paused
	I1216 06:21:51.841597    7800 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:21:51.851622    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:21:53.862268    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:52.109148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:52.140748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:52.186855    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.186855    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:52.193111    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:52.227511    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.227511    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:52.232508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:52.265331    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.265331    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:52.270635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:52.301130    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.301130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:52.307669    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:52.342623    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.342623    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:52.347794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:52.387246    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.387246    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:52.392629    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:52.445055    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.445143    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:52.449252    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:52.480071    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.480071    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:52.480071    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:52.480071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:52.547115    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:52.547115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:52.590630    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:52.590630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:52.690412    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:52.690412    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:52.690412    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:52.718573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:52.718573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:52.546527    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:52.603159    4424 machine.go:94] provisionDockerMachine start ...
	I1216 06:21:52.606161    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.662674    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.679442    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.679519    4424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:21:52.842464    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:52.842464    4424 ubuntu.go:182] provisioning hostname "kubenet-030800"
	I1216 06:21:52.846473    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.908771    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.908771    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.908771    4424 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-030800 && echo "kubenet-030800" | sudo tee /etc/hostname
	I1216 06:21:53.084692    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:53.088917    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.150284    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.150284    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.150284    4424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:21:53.322772    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
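
The SSH command above keeps /etc/hosts consistent with the new hostname without duplicating entries: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry, otherwise it appends one. The same idempotent update expressed as a local-file Go sketch; the demo path is hypothetical, and minikube performs this over SSH with sudo rather than locally.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsLine mirrors the shell snippet above: leave the file alone
// if the hostname is already mapped, otherwise rewrite the 127.0.1.1
// line when present, or append one. The suffix check is a rough
// stand-in for the stricter grep -xq '.*\skubenet-030800' match.
func ensureHostsLine(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
			return nil // already mapped, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	path := "/tmp/hosts-demo" // hypothetical demo file, not /etc/hosts
	_ = os.WriteFile(path, []byte("127.0.0.1 localhost\n127.0.1.1 buildroot\n"), 0644)
	if err := ensureHostsLine(path, "kubenet-030800"); err != nil {
		fmt.Println("update failed:", err)
	}
}
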
	I1216 06:21:53.322772    4424 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:21:53.322772    4424 ubuntu.go:190] setting up certificates
	I1216 06:21:53.322772    4424 provision.go:84] configureAuth start
	I1216 06:21:53.326658    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:53.379472    4424 provision.go:143] copyHostCerts
	I1216 06:21:53.379472    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:21:53.379472    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:21:53.379472    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:21:53.381506    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:21:53.381506    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:21:53.382025    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:21:53.383238    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:21:53.383286    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:21:53.383622    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:21:53.384729    4424 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-030800 san=[127.0.0.1 192.168.103.2 kubenet-030800 localhost minikube]
	I1216 06:21:53.446404    4424 provision.go:177] copyRemoteCerts
	I1216 06:21:53.450578    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:21:53.453632    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.508049    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:53.625841    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:21:53.652177    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:21:53.678648    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:21:53.702593    4424 provision.go:87] duration metric: took 379.8156ms to configureAuth
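
configureAuth above copies the CA material to the guest and generates a server certificate whose SANs cover every name the daemon may be reached by (127.0.0.1, the container IP, the machine name, localhost, minikube). A self-contained Go sketch producing a certificate with those SANs; it self-signs for brevity, whereas the real provisioner signs with the ca-key.pem shown above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-030800"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged by provision.go:117 above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:    []string{"kubenet-030800", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
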
	I1216 06:21:53.702593    4424 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:21:53.703116    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:53.706020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.763080    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.763659    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.763659    4424 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:21:53.941197    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:21:53.941229    4424 ubuntu.go:71] root file system type: overlay
	I1216 06:21:53.941395    4424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:21:53.945310    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.000318    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.000318    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.000318    4424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:21:54.194977    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:21:54.198986    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.262183    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.262873    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.262912    4424 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:21:55.764091    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:21:54.174803160 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:21:55.764091    4424 machine.go:97] duration metric: took 3.1608879s to provisionDockerMachine
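
The docker.service update above follows a write-if-changed pattern: render the unit to docker.service.new, diff it against the live unit, and only move it into place and restart the daemon when they differ, so an unchanged config never bounces Docker. A Go sketch of the same idiom against a local path; the demo path and the printed systemctl step are illustrative, not minikube's code.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged writes rendered to path only when the current
// contents differ, reporting whether a restart would be needed.
func replaceIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // unchanged: skip daemon-reload and restart
	}
	if err := os.WriteFile(path, rendered, 0644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		panic(err)
	}
	if changed {
		fmt.Println("would run:", exec.Command("systemctl", "daemon-reload").String())
	}
}
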
	I1216 06:21:55.764091    4424 client.go:176] duration metric: took 23.8239056s to LocalClient.Create
	I1216 06:21:55.764091    4424 start.go:167] duration metric: took 23.8239056s to libmachine.API.Create "kubenet-030800"
	I1216 06:21:55.764091    4424 start.go:293] postStartSetup for "kubenet-030800" (driver="docker")
	I1216 06:21:55.764091    4424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:21:55.769330    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:21:55.774020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:55.832721    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:55.960433    4424 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:21:55.968801    4424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:21:55.968801    4424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:21:55.969505    4424 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:21:55.973822    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:21:55.985938    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:21:56.011522    4424 start.go:296] duration metric: took 247.4281ms for postStartSetup
	I1216 06:21:56.016962    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.071317    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:56.078704    4424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:21:56.082131    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	W1216 06:21:55.087921    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:56.146380    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.278810    4424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:21:56.289463    4424 start.go:128] duration metric: took 24.3526481s to createHost
	I1216 06:21:56.289463    4424 start.go:83] releasing machines lock for "kubenet-030800", held for 24.352923s
	I1216 06:21:56.293770    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.349762    4424 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:21:56.354527    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.355718    4424 ssh_runner.go:195] Run: cat /version.json
	I1216 06:21:56.359207    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.419217    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.420010    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.548149    4424 ssh_runner.go:195] Run: systemctl --version
	W1216 06:21:56.549226    4424 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:21:56.567514    4424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:21:56.574755    4424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:21:56.580435    4424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:21:56.633416    4424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
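
The find ... -exec mv step above disables any preexisting bridge or podman CNI configs by renaming them with an .mk_disabled suffix rather than deleting them, so they cannot conflict with the configured network plugin (kubenet here, which cni.go later reports as disabled). A Go sketch of that rename pass; the glob walk is simplified, but like the real step it skips files already disabled.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs out of the way,
// mirroring the `-name *bridge* -or -name *podman*` find above.
func disableBridgeCNIs(dir string) error {
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	_ = disableBridgeCNIs("/etc/cni/net.d")
}
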
	I1216 06:21:56.633416    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:56.633416    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:56.633416    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:56.657618    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 06:21:56.658090    4424 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:21:56.658134    4424 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
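
The registry warning above is triggered by the probe at 06:21:56.349: the Windows build passed `curl.exe` to the Linux guest over SSH, where no such binary exists, so the check exited 127 with "command not found" and the connectivity warning fired regardless of the actual network state. A hypothetical Go sketch of choosing the probe binary by the OS of the machine that runs it, rather than the host build; this is a fix sketch, not minikube's actual code.

package main

import "fmt"

// curlBinary picks the probe command for the machine that will run it.
// Hypothetical: the log above shows "curl.exe" being sent to a Linux
// guest, where it exits 127 ("command not found").
func curlBinary(guestOS string) string {
	if guestOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary("linux") + " -sS -m 2 https://registry.k8s.io/")
}
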
	I1216 06:21:56.678200    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:21:56.690681    4424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:21:56.695430    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:21:56.714310    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.735757    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:21:56.754647    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.771876    4424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:21:56.790078    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:21:56.810936    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:21:56.828529    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:21:56.859717    4424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:21:56.876724    4424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:21:56.891719    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.036224    4424 ssh_runner.go:195] Run: sudo systemctl restart containerd
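
The block above rewrites /etc/containerd/config.toml with a series of in-place sed edits (pause image, SystemdCgroup = false to match the cgroupfs driver detected on the host, runc v2 runtime, conf_dir, unprivileged ports) and then daemon-reloads and restarts containerd so they take effect. A sketch of driving that edit list, with sshRun as a hypothetical helper standing in for ssh_runner; the sed expressions are copied from the entries above.

package main

import "fmt"

// A subset of the edits logged above; each is applied in place
// before containerd is restarted.
var containerdEdits = []string{
	`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
	`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
	`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
}

// sshRun is a hypothetical stand-in that would execute cmd on the guest.
func sshRun(cmd string) error {
	fmt.Println("Run:", cmd)
	return nil
}

func main() {
	for _, cmd := range containerdEdits {
		if err := sshRun(cmd); err != nil {
			panic(err)
		}
	}
	sshRun("sudo systemctl daemon-reload")
	sshRun("sudo systemctl restart containerd")
}
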
	I1216 06:21:57.185425    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:57.185522    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:57.190092    4424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:21:57.213249    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.239566    4424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:21:57.303231    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.326154    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:21:57.344861    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:57.372889    4424 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:21:57.386009    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:21:57.401220    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1216 06:21:57.422607    4424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:21:57.590920    4424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:21:57.727211    4424 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:21:57.727211    4424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:21:57.751771    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:21:57.772961    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.912458    4424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:21:58.834645    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:21:58.856232    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:21:58.880727    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:58.906712    4424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:21:59.052553    4424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:21:59.194941    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.333924    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:21:59.357147    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:21:59.379570    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.513788    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:21:59.631489    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:59.649336    4424 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:21:59.653752    4424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:21:59.660755    4424 start.go:564] Will wait 60s for crictl version
	I1216 06:21:59.665368    4424 ssh_runner.go:195] Run: which crictl
	I1216 06:21:59.677200    4424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:21:59.717428    4424 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:21:59.720622    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:21:59.765567    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	W1216 06:21:55.865199    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	W1216 06:21:58.365962    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:55.273773    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:55.297441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:55.334351    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.334404    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:55.338338    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:55.372344    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.372344    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:55.375335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:55.429711    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.429711    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:55.432707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:55.463415    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.463415    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:55.466882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:55.495871    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.495871    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:55.499782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:55.530135    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.530135    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:55.534032    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:55.561956    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.561956    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:55.567456    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:55.598684    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.598684    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:55.598684    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:55.598684    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:55.661553    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:55.661553    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:55.699330    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:55.699330    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:55.806271    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:55.806271    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:55.806271    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:55.839937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:55.839937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.400590    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:58.422580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:58.453081    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.453081    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:58.457176    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:58.482739    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.482739    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:58.486288    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:58.516198    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.516198    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:58.520374    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:58.550134    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.550134    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:58.553679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:58.585815    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.585815    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:58.589532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:58.620180    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.620180    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:58.626021    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:58.656946    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.656946    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:58.659942    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:58.687618    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.687618    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:58.687618    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:58.687618    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:58.777493    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:58.777493    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:58.777493    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:58.805676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:58.805676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.860391    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:58.860391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:58.925444    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:58.925444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:59.807579    4424 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:21:59.810667    4424 cli_runner.go:164] Run: docker exec -t kubenet-030800 dig +short host.docker.internal
	I1216 06:21:59.962844    4424 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:21:59.967733    4424 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:21:59.974503    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:21:59.995371    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:00.053937    4424 kubeadm.go:884] updating cluster {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:22:00.053937    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:22:00.057874    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.094105    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.094105    4424 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:22:00.097332    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.129189    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.129225    4424 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:22:00.129280    4424 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:22:00.129486    4424 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:22:00.132350    4424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:22:00.208072    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:00.208072    4424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:22:00.208072    4424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-030800 NodeName:kubenet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:22:00.208072    4424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
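Note on the config above: minikube renders these three documents (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one file and ships it to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2218-byte scp a few lines below). One way to inspect the rendered file on a live profile, using the profile name from this log:

	minikube ssh -p kubenet-030800 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new

The conntrack zeros (maxPerCore: 0 and the 0s timeouts) tell kube-proxy to leave the host's nf_conntrack sysctls untouched, as the inline comments in the dump indicate.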
	I1216 06:22:00.213204    4424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:22:00.225061    4424 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:22:00.229012    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:22:00.242127    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
	I1216 06:22:00.258591    4424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:22:00.278876    4424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 06:22:00.305788    4424 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:22:00.315868    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
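The one-liner above is an idempotent hosts-file pin: drop any existing control-plane.minikube.internal entry, append the current IP, and copy the temp file back with sudo. The same pattern spelled out with placeholder variables (NAME and IP are illustrative names, not from the log):

	NAME=control-plane.minikube.internal
	IP=192.168.103.2
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts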
	I1216 06:22:00.339710    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:00.483171    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:00.505844    4424 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800 for IP: 192.168.103.2
	I1216 06:22:00.505844    4424 certs.go:195] generating shared ca certs ...
	I1216 06:22:00.505844    4424 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.506501    4424 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:22:00.507023    4424 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:22:00.507484    4424 certs.go:257] generating profile certs ...
	I1216 06:22:00.507484    4424 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key
	I1216 06:22:00.507484    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt with IP's: []
	I1216 06:22:00.552695    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt ...
	I1216 06:22:00.552695    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt: {Name:mk4783bd7e1619c0ea341eaca75005ddd88d5b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.553960    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key ...
	I1216 06:22:00.553960    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key: {Name:mk427571c1896a50b896e76c58a633b5512ad44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.555335    4424 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8
	I1216 06:22:00.555661    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:22:00.581299    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 ...
	I1216 06:22:00.581299    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8: {Name:mk9cb22362f0ba7f5c0b5c6877c5c2e8d72eb278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.582304    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 ...
	I1216 06:22:00.582304    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8: {Name:mk2a3e21d232de7f748cffa074c96be0850cc9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.583303    4424 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt
	I1216 06:22:00.599920    4424 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key
	I1216 06:22:00.600703    4424 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key
	I1216 06:22:00.601353    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt with IP's: []
	I1216 06:22:00.664564    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt ...
	I1216 06:22:00.664564    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt: {Name:mk02eb62f20a18ad60f930ae30a248a87b7cb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.665010    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key ...
	I1216 06:22:00.665010    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key: {Name:mk8a8b2a6c6b1b3e2e2cc574e01303d6680bf793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.680006    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:22:00.680554    4424 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:22:00.680554    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:22:00.681404    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:22:00.683052    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:22:00.710388    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:22:00.737370    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:22:00.766290    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:22:00.790943    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:22:00.815072    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:22:00.839330    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:22:00.863340    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:22:00.921806    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:22:00.945068    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:22:00.972351    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:22:00.998813    4424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:22:01.025404    4424 ssh_runner.go:195] Run: openssl version
	I1216 06:22:01.039534    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.056142    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:22:01.077227    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.085140    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.089133    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
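The -hash call prints the OpenSSL subject-name hash for the CA; together with the earlier ln -fs into /etc/ssl/certs, this is what lets TLS clients on the node trust minikubeCA. A minimal sketch of the hash-symlink convention (the ${HASH}.0 link name follows OpenSSL's standard c_rehash layout; that step is not shown verbatim in this excerpt):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"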
	W1216 06:22:03.871305    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 06:22:03.871305    2100 node_ready.go:38] duration metric: took 6m0.0002926s for node "no-preload-686300" to be "Ready" ...
	I1216 06:22:03.874339    2100 out.go:203] 
	W1216 06:22:03.877050    2100 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:22:03.877050    2100 out.go:285] * 
	W1216 06:22:03.879403    2100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:22:03.881203    2100 out.go:203] 
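Process 2100 above belongs to a different profile (no-preload-686300) and has just exhausted its full 6m0s node-Ready budget. With a reachable apiserver the same condition could be read directly, e.g.:

	kubectl get node no-preload-686300 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

Here it never gets that far: as the kubelet journal later in this report shows, kubelet itself is crash-looping, so the Ready condition can never be satisfied.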
	W1216 06:22:00.861344    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:22:01.860562    7800 pod_ready.go:99] pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8s6v4" not found
	I1216 06:22:01.860562    7800 pod_ready.go:86] duration metric: took 10.0087717s for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:01.860562    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:03.875170    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
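Process 7800 is polling per-pod readiness and treats a deleted pod ("not found") as terminal, moving on to the replacement coredns pod. The equivalent one-shot check with kubectl, assuming the standard CoreDNS label:

	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s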
	I1216 06:22:01.467725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:01.493737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:01.526377    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.526426    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:01.530527    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:01.563582    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.563582    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:01.567007    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:01.606460    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.606460    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:01.610155    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:01.647965    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.647965    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:01.652309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:01.691011    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.691011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:01.695466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:01.736509    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.736509    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:01.739991    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:01.773642    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.773642    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:01.777617    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:01.811025    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.811141    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:01.811141    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:01.811141    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:01.881533    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:01.881533    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.919632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:01.919632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:02.020491    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:02.020538    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:02.020587    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:02.059933    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:02.060031    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
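Note the fallback in that last probe: use crictl when it is on PATH, otherwise fall back to the docker CLI. Standalone, the same line reads:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a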
	I1216 06:22:04.620893    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:04.645761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:04.680596    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.680596    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:04.684611    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:04.712607    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.712607    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:04.716737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:04.744218    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.744218    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:04.748501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:04.787600    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.787668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:04.791349    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:04.825500    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.825547    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:04.829098    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:04.878465    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.878465    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:04.881466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:04.910168    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.910168    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:04.914167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:04.949752    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.949810    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:04.949810    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:04.949873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> Docker <==
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570336952Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570433565Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570447467Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570465470Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570473171Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570498774Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570539380Z" level=info msg="Initializing buildkit"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.671982027Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680146533Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680337859Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680374664Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680404268Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:16:00 no-preload-686300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:16:01 no-preload-686300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:16:01 no-preload-686300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:05.982321    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.983916    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.985274    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.986445    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.987544    8484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633501] CPU: 10 PID: 466820 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f865800db20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f865800daf6.
	[  +0.000001] RSP: 002b:00007ffc8c624780 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000033] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.839091] CPU: 12 PID: 466960 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fa6af131b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fa6af131af6.
	[  +0.000001] RSP: 002b:00007ffe97387e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:22:06 up  1:58,  0 user,  load average: 3.44, 4.29, 4.20
	Linux no-preload-686300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:22:02 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:22:03 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 16 06:22:03 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:03 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:03 no-preload-686300 kubelet[8315]: E1216 06:22:03.625323    8315 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:22:03 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:22:03 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:22:04 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 16 06:22:04 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:04 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:04 no-preload-686300 kubelet[8329]: E1216 06:22:04.402027    8329 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:22:04 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:22:04 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:22:05 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 16 06:22:05 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:05 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:05 no-preload-686300 kubelet[8355]: E1216 06:22:05.162544    8355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:22:05 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:22:05 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:22:05 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 16 06:22:05 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:05 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:22:05 no-preload-686300 kubelet[8451]: E1216 06:22:05.888431    8451 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:22:05 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:22:05 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
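The kubelet journal above is the actual root cause for this test: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, systemd keeps restarting it (restart counter at 480+), and with no kubelet there are no control-plane containers, which matches the empty "container status" section and every "No container was found" warning. To check which cgroup version a host (here, WSL2) is running:

	stat -fc %T /sys/fs/cgroup/    # cgroup2fs means v2; tmpfs means v1/hybrid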
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 2 (595.8173ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (379.18s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (99.73s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-256200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1216 06:17:43.889720   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-256200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.347782s)
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_13.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
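The addon machinery itself got as far as the apply; the failure is the same dead apiserver (connection refused on localhost:8443), surfacing through kubectl's OpenAPI validation. With the apiserver reachable again, the identical apply from the error text can be replayed inside the node:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml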
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-256200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256200
helpers_test.go:244: (dbg) docker inspect newest-cni-256200:
-- stdout --
	[
	    {
	        "Id": "144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66",
	        "Created": "2025-12-16T06:09:14.512792797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:09:14.825267122Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hostname",
	        "HostsPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hosts",
	        "LogPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66-json.log",
	        "Name": "/newest-cni-256200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-256200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256200",
	                "Source": "/var/lib/docker/volumes/newest-cni-256200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256200",
	                "name.minikube.sigs.k8s.io": "newest-cni-256200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "771bfa7da2ead2842ed10177b89bf5ef2e45e3b61880ef998eb1675462cefe49",
	            "SandboxKey": "/var/run/docker/netns/771bfa7da2ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54657"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54658"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54659"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54660"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54661"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c97a08422fb6ea0a0f62c56d96c89be84aa4e33beba1ccaa82b7390e64b42c8e",
	                    "EndpointID": "8751925f2ee7cf9dc88323a2eb80efce9560f4ef2a0abb3571b1e150a2032db4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256200",
	                        "144d2cf5befb"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
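Everything needed for triage is in the inspect blob: the container is Running, ports 22/2376/8443 are published on loopback, and the node holds the static IP 192.168.94.2. Individual fields can be pulled without the full dump via a Go template (network name taken from the blob above; run from a bash shell):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "newest-cni-256200").IPAddress}}' newest-cni-256200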
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200: exit status 6 (602.7366ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1216 06:19:13.505675    3432 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
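Exit status 6 here means the host is Running but the kubeconfig is misconfigured: the profile "newest-cni-256200" has no entry in the kubeconfig the test points at (status.go:458). The fix the tool itself suggests (though with the entry missing entirely, a fresh `minikube start -p newest-cni-256200` may be needed to rewrite it):

	minikube update-context -p newest-cni-256200
	kubectl config current-context    # then verify the context and endpoint look right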
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25: (1.1317595s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                  │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p false-030800 sudo systemctl status kubelet --all --full --no-pager                                                                 │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo systemctl cat kubelet --no-pager                                                                                 │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo journalctl -xeu kubelet --all --full --no-pager                                                                  │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo cat /etc/kubernetes/kubelet.conf                                                                                 │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo cat /var/lib/kubelet/config.yaml                                                                                 │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo systemctl status docker --all --full --no-pager                                                                  │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo systemctl cat docker --no-pager                                                                                  │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo cat /etc/docker/daemon.json                                                                                      │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo docker system info                                                                                               │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo systemctl status cri-docker --all --full --no-pager                                                              │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo systemctl cat cri-docker --no-pager                                                                              │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:18 UTC │
	│ ssh     │ -p false-030800 sudo cri-dockerd --version                                                                                            │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:18 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo systemctl status containerd --all --full --no-pager                                                              │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo systemctl cat containerd --no-pager                                                                              │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo cat /lib/systemd/system/containerd.service                                                                       │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo cat /etc/containerd/config.toml                                                                                  │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo containerd config dump                                                                                           │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo systemctl status crio --all --full --no-pager                                                                    │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │                     │
	│ ssh     │ -p false-030800 sudo systemctl cat crio --no-pager                                                                                    │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ ssh     │ -p false-030800 sudo crio config                                                                                                      │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ delete  │ -p false-030800                                                                                                                       │ false-030800              │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │ 16 Dec 25 06:19 UTC │
	│ start   │ -p enable-default-cni-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker │ enable-default-cni-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:19:10
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:19:10.239463   14112 out.go:360] Setting OutFile to fd 1044 ...
	I1216 06:19:10.282470   14112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:10.282470   14112 out.go:374] Setting ErrFile to fd 1664...
	I1216 06:19:10.282470   14112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:10.296464   14112 out.go:368] Setting JSON to false
	I1216 06:19:10.299470   14112 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6971,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:19:10.299470   14112 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:19:10.304471   14112 out.go:179] * [enable-default-cni-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:19:10.307463   14112 notify.go:221] Checking for updates...
	I1216 06:19:10.307463   14112 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:19:10.309474   14112 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:19:10.312287   14112 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:19:10.314631   14112 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:19:10.316817   14112 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:19:10.320072   14112 config.go:182] Loaded profile config "flannel-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:19:10.320672   14112 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:19:10.320825   14112 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:19:10.320825   14112 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:19:10.445535   14112 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:19:10.449529   14112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:19:10.701086   14112 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:19:10.676525477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:19:10.704554   14112 out.go:179] * Using the docker driver based on user configuration
	I1216 06:19:10.708434   14112 start.go:309] selected driver: docker
	I1216 06:19:10.708434   14112 start.go:927] validating driver "docker" against <nil>
	I1216 06:19:10.708434   14112 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:19:10.749220   14112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:19:10.978901   14112 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:19:10.962859192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:19:10.978901   14112 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1216 06:19:10.979901   14112 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1216 06:19:10.979901   14112 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:19:10.981899   14112 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:19:10.983899   14112 cni.go:84] Creating CNI manager for "bridge"
	I1216 06:19:10.983899   14112 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 06:19:10.983899   14112 start.go:353] cluster config:
	{Name:enable-default-cni-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:19:10.986899   14112 out.go:179] * Starting "enable-default-cni-030800" primary control-plane node in "enable-default-cni-030800" cluster
	I1216 06:19:10.989902   14112 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:19:10.992904   14112 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:19:10.995900   14112 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:19:10.995900   14112 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:19:10.995900   14112 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:19:10.995900   14112 cache.go:65] Caching tarball of preloaded images
	I1216 06:19:10.995900   14112 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:19:10.995900   14112 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:19:10.995900   14112 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-030800\config.json ...
	I1216 06:19:10.996900   14112 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-030800\config.json: {Name:mk5434f953265dc60a4d677552cf03e3a93413cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:19:11.072353   14112 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:19:11.072353   14112 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:19:11.072353   14112 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:19:11.072353   14112 start.go:360] acquireMachinesLock for enable-default-cni-030800: {Name:mk620574012d9a5f64538f0f0ebd31b8b77b45e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:19:11.072353   14112 start.go:364] duration metric: took 0s to acquireMachinesLock for "enable-default-cni-030800"
	I1216 06:19:11.072353   14112 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-030800 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:19:11.072353   14112 start.go:125] createHost starting for "" (driver="docker")
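
Note the lone E-level line in this start sequence: the test passed the deprecated `--enable-default-cni` flag, and minikube mapped it onto `--cni=bridge` (hence the `EnableDefaultCNI:false CNI:bridge` pair in the cluster config above). As a sketch against the minikube v1.37 CLI, the non-deprecated equivalent of the invocation shown in the command table would be:

    out/minikube-windows-amd64.exe start -p enable-default-cni-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker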
	
	
	==> Docker <==
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165095582Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165188891Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165199992Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165205393Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165211193Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165233596Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.165273599Z" level=info msg="Initializing buildkit"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.285487942Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291596049Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291751064Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291846574Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:09:24 newest-cni-256200 dockerd[1193]: time="2025-12-16T06:09:24.291875877Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:09:24 newest-cni-256200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:09:25 newest-cni-256200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:09:25 newest-cni-256200 cri-dockerd[1487]: time="2025-12-16T06:09:25Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:09:25 newest-cni-256200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
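
cri-dockerd is the CRI shim that lets the kubelet drive the Docker engine, and the last lines above show its gRPC backend coming up. To cross-check the empty container table below against the runtime itself, the CRI endpoint can be queried directly (a diagnostic sketch, assuming `crictl` ships in the node image):

    out/minikube-windows-amd64.exe -p newest-cni-256200 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a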
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:19:14.551700   12675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:19:14.553414   12675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:19:14.555043   12675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:19:14.557731   12675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:19:14.559100   12675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
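
These connection-refused errors are a downstream symptom rather than the fault: nothing is listening on 8443 because the kubelet (see the kubelet section below) never stays up long enough to launch the static control-plane pods. A quick check from the node, assuming shell access:

    out/minikube-windows-amd64.exe -p newest-cni-256200 ssh -- 'sudo ss -tlnp | grep 8443 || echo no apiserver listener'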
	
	
	==> dmesg <==
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.804728] CPU: 7 PID: 439149 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f0169e17b20
	[  +0.000009] Code: Unable to access opcode bytes at RIP 0x7f0169e17af6.
	[  +0.000001] RSP: 002b:00007ffd6e630ae0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.881903] CPU: 13 PID: 439298 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f36d6688b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f36d6688af6.
	[  +0.000001] RSP: 002b:00007ffffce3c0d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 06:19] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:19:14 up  1:55,  0 user,  load average: 4.95, 4.81, 4.31
	Linux newest-cni-256200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:19:11 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:19:11 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 452.
	Dec 16 06:19:11 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:11 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:11 newest-cni-256200 kubelet[12497]: E1216 06:19:11.892989   12497 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:19:11 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:19:11 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:19:12 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 453.
	Dec 16 06:19:12 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:12 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:12 newest-cni-256200 kubelet[12510]: E1216 06:19:12.641902   12510 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:19:12 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:19:12 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:19:13 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 454.
	Dec 16 06:19:13 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:13 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:13 newest-cni-256200 kubelet[12536]: E1216 06:19:13.400042   12536 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:19:13 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:19:13 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:19:14 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 455.
	Dec 16 06:19:14 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:14 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:19:14 newest-cni-256200 kubelet[12566]: E1216 06:19:14.147862   12566 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:19:14 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:19:14 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
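
This restart loop is the likely root cause of the newest-cni (and no-preload) failures in this report: kubelet v1.35.0-beta.0 fails configuration validation outright on a cgroup v1 host, and the WSL2 kernel backing this Docker Desktop install (5.15.153.1; note the daemon's own "Support for cgroup v1 is deprecated" warning in the Docker section above) is still in v1 mode. A quick way to confirm the host's cgroup mode from inside the node (a sketch, assuming shell access via `minikube ssh`):

    stat -fc %T /sys/fs/cgroup/
    # prints "cgroup2fs" on a unified (v2) host, "tmpfs" on cgroup v1

One commonly documented way to switch WSL2 to the unified hierarchy is `kernelCommandLine = cgroup_no_v1=all` in `%UserProfile%\.wslconfig` followed by `wsl --shutdown`; whether that is appropriate for this Jenkins host is not something this log can answer.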
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200: exit status 6 (560.7039ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1216 06:19:15.212599    1564 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-256200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
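
For reference, the stale-context warning above points at `minikube update-context`, which rewrites a profile's kubeconfig entry to its current API server endpoint. It cannot help here, since the stderr shows "newest-cni-256200" is missing from the kubeconfig altogether, but on a healthy cluster the suggested fix is simply:

    out/minikube-windows-amd64.exe -p newest-cni-256200 update-context
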
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-256200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (99.73s)

TestStartStop/group/newest-cni/serial/SecondStart (381.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-256200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
E1216 06:19:29.118362   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-256200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m15.4496771s)

-- stdout --
	* [newest-cni-256200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "newest-cni-256200" primary control-plane node in "newest-cni-256200" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1216 06:19:19.929336    8452 out.go:360] Setting OutFile to fd 1948 ...
	I1216 06:19:19.975023    8452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:19.975023    8452 out.go:374] Setting ErrFile to fd 1668...
	I1216 06:19:19.975023    8452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:19:19.990114    8452 out.go:368] Setting JSON to false
	I1216 06:19:19.992506    8452 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6981,"bootTime":1765858978,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:19:19.992506    8452 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:19:19.996540    8452 out.go:179] * [newest-cni-256200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:19:20.002287    8452 notify.go:221] Checking for updates...
	I1216 06:19:20.006491    8452 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:19:20.011875    8452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:19:20.017385    8452 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:19:20.024066    8452 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:19:20.031064    8452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:19:20.037129    8452 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:19:20.038376    8452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:19:20.155597    8452 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:19:20.159374    8452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:19:20.401981    8452 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:104 SystemTime:2025-12-16 06:19:20.379255293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:19:20.414344    8452 out.go:179] * Using the docker driver based on existing profile
	I1216 06:19:20.421090    8452 start.go:309] selected driver: docker
	I1216 06:19:20.421125    8452 start.go:927] validating driver "docker" against &{Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:19:20.421332    8452 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:19:20.464906    8452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:19:20.715648    8452 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:104 SystemTime:2025-12-16 06:19:20.692486067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 In
dexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDesc
ription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Prog
ram Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:19:20.716643    8452 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 06:19:20.716643    8452 cni.go:84] Creating CNI manager for ""
	I1216 06:19:20.716643    8452 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:19:20.716643    8452 start.go:353] cluster config:
	{Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
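
The `cni.go` lines above explain why this profile stores no explicit CNI (`CNI:` is empty in the config dump): with the docker driver and the docker container runtime on Kubernetes v1.24+, minikube recommends bridge and manages it itself. The resulting CNI configuration can be listed from the node (a sketch):

    out/minikube-windows-amd64.exe -p newest-cni-256200 ssh -- ls /etc/cni/net.d/
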
	I1216 06:19:20.726643    8452 out.go:179] * Starting "newest-cni-256200" primary control-plane node in "newest-cni-256200" cluster
	I1216 06:19:20.730649    8452 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:19:20.735876    8452 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:19:20.741464    8452 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:19:20.741464    8452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:19:20.741464    8452 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 06:19:20.741999    8452 cache.go:65] Caching tarball of preloaded images
	I1216 06:19:20.742464    8452 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:19:20.742623    8452 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1216 06:19:20.742899    8452 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\config.json ...
	I1216 06:19:20.811422    8452 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:19:20.811422    8452 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:19:20.811422    8452 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:19:20.811422    8452 start.go:360] acquireMachinesLock for newest-cni-256200: {Name:mk3285fa9eff9b8fb8b7734006d0edc9845e0471 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:19:20.812411    8452 start.go:364] duration metric: took 988.9µs to acquireMachinesLock for "newest-cni-256200"
	I1216 06:19:20.812411    8452 start.go:96] Skipping create...Using existing machine configuration
	I1216 06:19:20.812411    8452 fix.go:54] fixHost starting: 
	I1216 06:19:20.819401    8452 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:19:20.873072    8452 fix.go:112] recreateIfNeeded on newest-cni-256200: state=Stopped err=<nil>
	W1216 06:19:20.873072    8452 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 06:19:20.875993    8452 out.go:252] * Restarting existing docker container for "newest-cni-256200" ...
	I1216 06:19:20.879548    8452 cli_runner.go:164] Run: docker start newest-cni-256200
	I1216 06:19:21.874160    8452 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:19:21.929164    8452 kic.go:430] container "newest-cni-256200" state is running.
	I1216 06:19:21.934748    8452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256200
	I1216 06:19:21.989089    8452 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\config.json ...
	I1216 06:19:21.991083    8452 machine.go:94] provisionDockerMachine start ...
	I1216 06:19:21.994069    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:22.054071    8452 main.go:143] libmachine: Using SSH client type: native
	I1216 06:19:22.054071    8452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55872 <nil> <nil>}
	I1216 06:19:22.054071    8452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:19:22.057070    8452 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1216 06:19:25.226271    8452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256200
	
	I1216 06:19:25.226363    8452 ubuntu.go:182] provisioning hostname "newest-cni-256200"
	I1216 06:19:25.229611    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:25.314961    8452 main.go:143] libmachine: Using SSH client type: native
	I1216 06:19:25.315677    8452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55872 <nil> <nil>}
	I1216 06:19:25.315677    8452 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-256200 && echo "newest-cni-256200" | sudo tee /etc/hostname
	I1216 06:19:25.487122    8452 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-256200
	
	I1216 06:19:25.491675    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:25.553610    8452 main.go:143] libmachine: Using SSH client type: native
	I1216 06:19:25.554116    8452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55872 <nil> <nil>}
	I1216 06:19:25.554179    8452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-256200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-256200/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-256200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:19:25.728935    8452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
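
For context on the guarded script above: it first checks whether /etc/hosts already names the machine, rewrites an existing 127.0.1.1 line in place if one exists, and only otherwise appends one, so repeated provisioning never duplicates entries. A minimal Go sketch of the same check-then-rewrite logic (file path and hostname taken from the log; ensureHostname is a hypothetical helper, not minikube code):

    // hostsentry.go — sketch of the idempotent /etc/hosts update performed by
    // the guarded shell script above; illustrative only.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func ensureHostname(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	// Already present? Do nothing (mirrors the outer grep -xq guard).
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
    		return nil
    	}
    	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loop.Match(data) {
    		// Rewrite the existing 127.0.1.1 line in place (the sed branch).
    		data = loop.ReplaceAll(data, []byte("127.0.1.1 "+name))
    	} else {
    		// Append a fresh line (the tee -a branch).
    		data = append(data, []byte(fmt.Sprintf("127.0.1.1 %s\n", name))...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := ensureHostname("/etc/hosts", "newest-cni-256200"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
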
	I1216 06:19:25.729005    8452 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:19:25.729087    8452 ubuntu.go:190] setting up certificates
	I1216 06:19:25.729134    8452 provision.go:84] configureAuth start
	I1216 06:19:25.733000    8452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256200
	I1216 06:19:25.784306    8452 provision.go:143] copyHostCerts
	I1216 06:19:25.784890    8452 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:19:25.784924    8452 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:19:25.785183    8452 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:19:25.785772    8452 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:19:25.785772    8452 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:19:25.785772    8452 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:19:25.787106    8452 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:19:25.787133    8452 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:19:25.787401    8452 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:19:25.787803    8452 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-256200 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-256200]
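
The provision.go line above issues a server certificate whose SAN list covers every name the API server may be dialed by: loopback, the container IP, and the host aliases. A self-contained crypto/x509 sketch producing a certificate with the same SANs; the self-signed issuer here stands in for the minikube CA, so this is illustrative, not minikube's provisioning code:

    // servercert.go — sketch of issuing a server cert with the SAN list logged
    // above; a toy self-signed issuer replaces the real CA.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-256200"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN entries from the provision.go line: IPs plus DNS names.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-256200"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
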
	I1216 06:19:25.847253    8452 provision.go:177] copyRemoteCerts
	I1216 06:19:25.852271    8452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:19:25.855526    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:25.909105    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:19:26.042071    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:19:26.073213    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 06:19:26.102442    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:19:26.127206    8452 provision.go:87] duration metric: took 398.0665ms to configureAuth
	I1216 06:19:26.127206    8452 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:19:26.127742    8452 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:19:26.131679    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:26.202017    8452 main.go:143] libmachine: Using SSH client type: native
	I1216 06:19:26.203024    8452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55872 <nil> <nil>}
	I1216 06:19:26.203024    8452 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:19:26.414560    8452 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:19:26.414560    8452 ubuntu.go:71] root file system type: overlay
	I1216 06:19:26.414560    8452 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:19:26.417545    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:26.500249    8452 main.go:143] libmachine: Using SSH client type: native
	I1216 06:19:26.500249    8452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55872 <nil> <nil>}
	I1216 06:19:26.500249    8452 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:19:26.716318    8452 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:19:26.721318    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:26.778021    8452 main.go:143] libmachine: Using SSH client type: native
	I1216 06:19:26.779025    8452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 55872 <nil> <nil>}
	I1216 06:19:26.779025    8452 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:19:26.971443    8452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
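
The SSH one-liner above is a change-detection idiom: diff -u exits non-zero only when the freshly rendered unit differs from the installed one, and only then is the file swapped in and docker reloaded and restarted, keeping an unchanged restart cycle cheap. A sketch of the same flow via os/exec (run is a local helper; the diff and systemctl invocations mirror the logged command):

    // unitswap.go — sketch of the diff-and-restart idiom from the SSH command
    // above: replace the unit and bounce the service only when content changed.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	unit := "/lib/systemd/system/docker.service"
    	// diff -u exits non-zero when the files differ (or the old one is absent).
    	if err := run("diff", "-u", unit, unit+".new"); err == nil {
    		return // identical: nothing to do, no restart needed
    	}
    	for _, step := range [][]string{
    		{"mv", unit + ".new", unit},
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		if err := run(step[0], step[1:]...); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			os.Exit(1)
    		}
    	}
    }
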
	I1216 06:19:26.971443    8452 machine.go:97] duration metric: took 4.9802907s to provisionDockerMachine
	I1216 06:19:26.971443    8452 start.go:293] postStartSetup for "newest-cni-256200" (driver="docker")
	I1216 06:19:26.971443    8452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:19:26.976884    8452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:19:26.979546    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:27.042287    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:19:27.185629    8452 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:19:27.194980    8452 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:19:27.195070    8452 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:19:27.195070    8452 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:19:27.195545    8452 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:19:27.196375    8452 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:19:27.202149    8452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:19:27.275363    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:19:27.362204    8452 start.go:296] duration metric: took 390.7553ms for postStartSetup
	I1216 06:19:27.373624    8452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:19:27.377595    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:27.451948    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:19:27.572184    8452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:19:27.583193    8452 fix.go:56] duration metric: took 6.7706886s for fixHost
	I1216 06:19:27.583193    8452 start.go:83] releasing machines lock for "newest-cni-256200", held for 6.7706886s
	I1216 06:19:27.586190    8452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-256200
	I1216 06:19:27.639195    8452 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:19:27.645198    8452 ssh_runner.go:195] Run: cat /version.json
	I1216 06:19:27.645198    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:27.650192    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:27.708202    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:19:27.711188    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	W1216 06:19:27.828279    8452 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
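
The status-127 failure above appears to be what trips the registry warning a few lines below: the probe shells out to curl.exe, the Windows binary name, inside the Linux guest, where only curl exists, so the connectivity check cannot succeed regardless of actual reachability. An in-process probe avoids depending on a guest binary entirely; a hedged sketch (the URL and the 2-second budget come from the logged command):

    // probe.go — minimal in-process reachability check for the registry URL
    // probed with curl.exe above; the timeout mirrors curl's -m 2.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get("https://registry.k8s.io/")
    	if err != nil {
    		fmt.Println("unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }
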
	I1216 06:19:27.833674    8452 ssh_runner.go:195] Run: systemctl --version
	I1216 06:19:27.854857    8452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:19:27.866813    8452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:19:27.872820    8452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:19:27.886816    8452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 06:19:27.886816    8452 start.go:496] detecting cgroup driver to use...
	I1216 06:19:27.886816    8452 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:19:27.886816    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:19:27.914813    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1216 06:19:27.932813    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1216 06:19:27.947241    8452 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:19:27.947241    8452 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:19:27.954106    8452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:19:27.960100    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:19:27.985100    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:19:28.005093    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:19:28.022120    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:19:28.041094    8452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:19:28.061087    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:19:28.085087    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:19:28.110089    8452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
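
The sed pipeline above patches containerd's TOML in place: pin the sandbox image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, normalize the runc runtime name, and re-insert enable_unprivileged_ports under the CRI plugin table. The last step can be sketched in Go with the same regexp surgery (path and key names from the log; this is a stand-in for the shelled-out sed, not minikube's code):

    // toml_patch.go — sketch of the final sed above: insert
    // "enable_unprivileged_ports = true" under containerd's CRI plugin table.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	re := regexp.MustCompile(`(?m)^( *)\[plugins\."io\.containerd\.grpc\.v1\.cri"\]`)
    	// $0 keeps the matched header line; $1 reuses its indentation, exactly
    	// as the sed expression does with & and \1.
    	data = re.ReplaceAll(data, []byte("$0\n$1  enable_unprivileged_ports = true"))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }
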
	I1216 06:19:28.129096    8452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:19:28.149595    8452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:19:28.170149    8452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:19:28.347023    8452 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 06:19:28.508616    8452 start.go:496] detecting cgroup driver to use...
	I1216 06:19:28.508616    8452 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:19:28.512612    8452 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:19:28.539608    8452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:19:28.573556    8452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:19:28.654887    8452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:19:28.681644    8452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:19:28.701555    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:19:28.732267    8452 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:19:28.745270    8452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:19:28.760935    8452 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1216 06:19:28.787942    8452 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:19:28.983797    8452 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:19:29.134368    8452 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:19:29.135365    8452 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:19:29.165446    8452 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:19:29.187969    8452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:19:29.373169    8452 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:19:30.304230    8452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:19:30.335240    8452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:19:30.366132    8452 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1216 06:19:30.397358    8452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:19:30.422549    8452 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:19:30.591339    8452 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:19:30.751746    8452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:19:30.920315    8452 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:19:30.946936    8452 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:19:30.968498    8452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:19:31.134733    8452 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:19:31.255747    8452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:19:31.282434    8452 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:19:31.287440    8452 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:19:31.294432    8452 start.go:564] Will wait 60s for crictl version
	I1216 06:19:31.299432    8452 ssh_runner.go:195] Run: which crictl
	I1216 06:19:31.311444    8452 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:19:31.352359    8452 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:19:31.358737    8452 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:19:31.404954    8452 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:19:31.447829    8452 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1216 06:19:31.451956    8452 cli_runner.go:164] Run: docker exec -t newest-cni-256200 dig +short host.docker.internal
	I1216 06:19:31.601261    8452 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:19:31.605258    8452 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:19:31.611255    8452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:19:31.628257    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:31.692917    8452 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 06:19:31.695937    8452 kubeadm.go:884] updating cluster {Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:19:31.695937    8452 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 06:19:31.699919    8452 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:19:31.729994    8452 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:19:31.729994    8452 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:19:31.733897    8452 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:19:31.777112    8452 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
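
Both docker images listings above return the full expected set, which is why extraction and image loading are skipped. The decision reduces to a set-membership check; a small sketch (the want list is abbreviated from the images printed above):

    // preloadcheck.go — sketch of the "Images already preloaded" decision:
    // list repo:tag pairs with the same docker command the log runs, then
    // verify an expected set is present.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	want := []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/pause:3.10.1",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, img := range want {
    		if !have[img] {
    			fmt.Println("missing, extraction needed:", img)
    			return
    		}
    	}
    	fmt.Println("images already preloaded, skipping extraction")
    }
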
	I1216 06:19:31.777112    8452 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:19:31.777112    8452 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 docker true true} ...
	I1216 06:19:31.778457    8452 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-256200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:19:31.781916    8452 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:19:31.885770    8452 cni.go:84] Creating CNI manager for ""
	I1216 06:19:31.885770    8452 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 06:19:31.885770    8452 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1216 06:19:31.885770    8452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-256200 NodeName:newest-cni-256200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:19:31.885770    8452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-256200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
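
The generated file above packs several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one kubeadm config. A sketch that walks the documents and reads back the kubelet's cgroupDriver, assuming gopkg.in/yaml.v3 as the parser (the path is the /var/tmp/minikube/kubeadm.yaml target that the .new file further down is diffed against):

    // kubeletcfg.go — sketch reading one document out of the multi-document
    // kubeadm config shown above; field names come from the config itself.
    package main

    import (
    	"bytes"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	// Decode each ----separated document and keep KubeletConfiguration.
    	dec := yaml.NewDecoder(bytes.NewReader(data))
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		if doc["kind"] == "KubeletConfiguration" {
    			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
    		}
    	}
    }
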
	
	I1216 06:19:31.892763    8452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1216 06:19:31.911145    8452 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:19:31.916194    8452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:19:31.927195    8452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1216 06:19:31.944198    8452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1216 06:19:31.964256    8452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1216 06:19:31.985847    8452 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:19:31.993353    8452 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:19:32.012895    8452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:19:32.166744    8452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:19:32.192569    8452 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200 for IP: 192.168.94.2
	I1216 06:19:32.192569    8452 certs.go:195] generating shared ca certs ...
	I1216 06:19:32.192569    8452 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:19:32.192569    8452 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:19:32.193948    8452 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:19:32.194103    8452 certs.go:257] generating profile certs ...
	I1216 06:19:32.194275    8452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\client.key
	I1216 06:19:32.194811    8452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key.6f0b4644
	I1216 06:19:32.194965    8452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.key
	I1216 06:19:32.195591    8452 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:19:32.195591    8452 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:19:32.195591    8452 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:19:32.196304    8452 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:19:32.196570    8452 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:19:32.196847    8452 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:19:32.197302    8452 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:19:32.198505    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:19:32.228574    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:19:32.259522    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:19:32.292381    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:19:32.322576    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 06:19:32.355221    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 06:19:32.383210    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:19:32.409207    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-256200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 06:19:32.436295    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:19:32.463661    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:19:32.491061    8452 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:19:32.517992    8452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:19:32.542862    8452 ssh_runner.go:195] Run: openssl version
	I1216 06:19:32.561941    8452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:19:32.581031    8452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:19:32.598535    8452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:19:32.608329    8452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:19:32.614169    8452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:19:32.675988    8452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:19:32.691976    8452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:19:32.706975    8452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:19:32.723694    8452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:19:32.730277    8452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:19:32.734273    8452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:19:32.801298    8452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:19:32.817157    8452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:19:32.833032    8452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:19:32.851935    8452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:19:32.861721    8452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:19:32.868049    8452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:19:32.922099    8452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
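
Each test -s / ln -fs / openssl x509 -hash triple above rebuilds one OpenSSL lookup symlink: the subject-hash output (51391683, 3ec20f2e, b5213941 in this run) names a <hash>.0 link in /etc/ssl/certs that OpenSSL uses to locate a CA by subject. One step of that, sketched by shelling out to the same openssl flags (paths taken from the log):

    // carehash.go — sketch of one hash-and-link step from the log: compute the
    // subject hash and point /etc/ssl/certs/<hash>.0 at the PEM file.
    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    }
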
	I1216 06:19:32.946140    8452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:19:32.959124    8452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 06:19:33.015146    8452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 06:19:33.064154    8452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 06:19:33.129153    8452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 06:19:33.212180    8452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 06:19:33.281909    8452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
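
The six openssl x509 -checkend 86400 runs above assert that each control-plane certificate stays valid for at least another 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same predicate in Go against one of the logged paths:

    // checkend.go — sketch of what `openssl x509 -checkend 86400` asserts: the
    // certificate must remain valid for another 24h.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 86400s, regeneration needed")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another day")
    }
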
	I1216 06:19:33.327296    8452 kubeadm.go:401] StartCluster: {Name:newest-cni-256200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-256200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:19:33.331385    8452 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:19:33.375309    8452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:19:33.387167    8452 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 06:19:33.387167    8452 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 06:19:33.391163    8452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 06:19:33.402166    8452 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 06:19:33.405163    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:33.465971    8452 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-256200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:19:33.466734    8452 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-256200" cluster setting kubeconfig missing "newest-cni-256200" context setting]
	I1216 06:19:33.467785    8452 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:19:33.491878    8452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 06:19:33.505528    8452 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1216 06:19:33.505631    8452 kubeadm.go:602] duration metric: took 118.4619ms to restartPrimaryControlPlane
	I1216 06:19:33.505631    8452 kubeadm.go:403] duration metric: took 178.3322ms to StartCluster
	I1216 06:19:33.505671    8452 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:19:33.505805    8452 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:19:33.507569    8452 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:19:33.508494    8452 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:19:33.508494    8452 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:19:33.508677    8452 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-256200"
	I1216 06:19:33.508743    8452 addons.go:70] Setting dashboard=true in profile "newest-cni-256200"
	I1216 06:19:33.508743    8452 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-256200"
	I1216 06:19:33.508743    8452 addons.go:239] Setting addon dashboard=true in "newest-cni-256200"
	W1216 06:19:33.508795    8452 addons.go:248] addon dashboard should already be in state true
	I1216 06:19:33.508743    8452 addons.go:70] Setting default-storageclass=true in profile "newest-cni-256200"
	I1216 06:19:33.508897    8452 host.go:66] Checking if "newest-cni-256200" exists ...
	I1216 06:19:33.508897    8452 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-256200"
	I1216 06:19:33.508795    8452 host.go:66] Checking if "newest-cni-256200" exists ...
	I1216 06:19:33.508897    8452 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:19:33.511343    8452 out.go:179] * Verifying Kubernetes components...
	I1216 06:19:33.518220    8452 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:19:33.518220    8452 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:19:33.520217    8452 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:19:33.520217    8452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:19:33.588461    8452 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:19:33.588461    8452 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 06:19:33.590461    8452 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:19:33.588461    8452 addons.go:239] Setting addon default-storageclass=true in "newest-cni-256200"
	I1216 06:19:33.590461    8452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:19:33.590461    8452 host.go:66] Checking if "newest-cni-256200" exists ...
	I1216 06:19:33.594459    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:33.594459    8452 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1216 06:19:33.597453    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 06:19:33.597453    8452 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 06:19:33.598459    8452 cli_runner.go:164] Run: docker container inspect newest-cni-256200 --format={{.State.Status}}
	I1216 06:19:33.600456    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:33.655856    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:19:33.655856    8452 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:19:33.655856    8452 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:19:33.660861    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:19:33.660861    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:33.712856    8452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:19:33.713860    8452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55872 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-256200\id_rsa Username:docker}
	I1216 06:19:33.780229    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:19:33.783235    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 06:19:33.783235    8452 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 06:19:33.802858    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 06:19:33.802858    8452 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 06:19:33.814904    8452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-256200
	I1216 06:19:33.838221    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 06:19:33.838221    8452 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 06:19:33.867953    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 06:19:33.867953    8452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 06:19:33.868965    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:19:33.879960    8452 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:19:33.882960    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:33.951909    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 06:19:33.951909    8452 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1216 06:19:33.954836    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:33.954836    8452 retry.go:31] will retry after 358.338358ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:33.976882    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 06:19:33.976882    8452 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 06:19:34.048223    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 06:19:34.048223    8452 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 06:19:34.075744    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 06:19:34.075744    8452 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1216 06:19:34.077004    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.077089    8452 retry.go:31] will retry after 220.977295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.094410    8452 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:19:34.094410    8452 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 06:19:34.125006    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:34.212812    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.212812    8452 retry.go:31] will retry after 174.083601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.301436    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:19:34.317419    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:19:34.383295    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:34.391300    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:34.397300    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.397300    8452 retry.go:31] will retry after 237.009001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:19:34.405300    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.405300    8452 retry.go:31] will retry after 525.625187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:19:34.516299    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.516299    8452 retry.go:31] will retry after 481.268776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.643649    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:19:34.754883    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.754883    8452 retry.go:31] will retry after 839.794187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:34.887057    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:34.937456    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:19:35.001828    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:35.016822    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.016822    8452 retry.go:31] will retry after 790.635256ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:19:35.092621    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.092621    8452 retry.go:31] will retry after 283.48967ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.384267    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:19:35.386654    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 06:19:35.480978    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.481045    8452 retry.go:31] will retry after 676.060887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.600574    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:19:35.716585    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.716585    8452 retry.go:31] will retry after 670.54657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.812790    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:19:35.884397    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 06:19:35.900383    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:35.900383    8452 retry.go:31] will retry after 850.776825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:36.161825    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:36.245574    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:36.245683    8452 retry.go:31] will retry after 1.526723775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:36.387302    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:36.391815    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:19:36.487187    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:36.487187    8452 retry.go:31] will retry after 1.537973477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:36.755957    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:19:36.836171    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:36.836171    8452 retry.go:31] will retry after 1.857707235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:36.884528    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:37.386466    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:37.777262    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:37.883558    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:37.883558    8452 retry.go:31] will retry after 1.256892338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:37.884565    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:38.030151    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:19:38.119107    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:38.119107    8452 retry.go:31] will retry after 1.986304777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:38.385431    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:38.698278    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:19:38.806785    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:38.806785    8452 retry.go:31] will retry after 1.711490783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:38.885801    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:39.146235    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:39.276417    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:39.276417    8452 retry.go:31] will retry after 1.953276882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:39.386552    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:39.887117    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:40.110583    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:19:40.218685    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:40.218685    8452 retry.go:31] will retry after 2.133626211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:40.384997    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:40.523534    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:19:40.614154    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:40.614154    8452 retry.go:31] will retry after 1.951626586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:40.884573    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:41.235499    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:41.319707    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:41.319740    8452 retry.go:31] will retry after 5.959335233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:41.387440    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:41.885075    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:42.359501    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:19:42.385257    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 06:19:42.443945    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:42.443945    8452 retry.go:31] will retry after 3.143831307s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:42.571152    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:19:42.663768    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:42.663849    8452 retry.go:31] will retry after 5.38350851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:42.884101    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:43.385584    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:43.891414    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:44.386300    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:44.884551    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:45.384866    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:45.592651    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:19:45.670955    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:45.670955    8452 retry.go:31] will retry after 6.16949463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:45.885534    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:46.385097    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:46.886547    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:47.284483    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:47.373338    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:47.373338    8452 retry.go:31] will retry after 5.690212299s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:47.385624    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:47.886158    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:48.053007    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:19:48.134525    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:48.134525    8452 retry.go:31] will retry after 4.10666026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:48.385259    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:48.885834    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:49.383763    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:49.886969    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:50.384954    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:50.885126    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:51.386223    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:51.850869    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:19:51.885494    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 06:19:51.946876    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:51.946876    8452 retry.go:31] will retry after 9.782225452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:52.247508    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:19:52.329638    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:52.329638    8452 retry.go:31] will retry after 12.711172126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:52.384492    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:52.885170    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:19:53.068518    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:19:53.151469    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:53.151469    8452 retry.go:31] will retry after 9.566959493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:19:53.384498    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~0.5s through 06:20:01.387781 ...]
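The half-second cadence above is minikube polling for a running apiserver process while it keeps retrying the addon applies. A hedged sketch of that wait loop, run locally rather than through the SSH runner the log shows:

	// wait_apiserver.go - poll twice a second, up to a deadline, for a
	// kube-apiserver process, mirroring the repeated pgrep probes above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(90 * time.Second)
		for time.Now().Before(deadline) {
			// -x: whole command line must match, -n: newest match only,
			// -f: match against the full command line, not just the name
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver is running")
				return
			}
			time.Sleep(500 * time.Millisecond) // the ~0.5s cadence in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}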
	I1216 06:20:01.735017    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:20:01.853120    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:01.853120    8452 retry.go:31] will retry after 8.689712169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:01.886107    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:02.388176    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:02.722647    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:20:02.842607    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:02.842607    8452 retry.go:31] will retry after 12.365283122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:02.886624    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:03.385438    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:03.885354    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:04.385148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:04.887482    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:05.045360    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:20:05.149630    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:05.149630    8452 retry.go:31] will retry after 7.58803718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:05.386128    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~0.5s through 06:20:10.385267 ...]
	I1216 06:20:10.548057    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:20:10.628815    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:10.628815    8452 retry.go:31] will retry after 29.313903521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:10.885113    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:11.385109    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:11.886102    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:12.385592    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:12.745281    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:20:12.822727    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:12.822727    8452 retry.go:31] will retry after 21.640254894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:12.885298    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:13.386045    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:13.885596    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:14.385362    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:14.885169    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:15.213138    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1216 06:20:15.300911    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:15.300980    8452 retry.go:31] will retry after 18.522982726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:15.385633    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~0.5s through 06:20:33.386936 ...]
	I1216 06:20:33.829158    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:20:33.884185    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:33.922145    8452 logs.go:282] 0 containers: []
	W1216 06:20:33.922145    8452 logs.go:284] No container was found matching "kube-apiserver"
	W1216 06:20:33.922145    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:33.922145    8452 retry.go:31] will retry after 34.17490091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:33.928410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:33.966642    8452 logs.go:282] 0 containers: []
	W1216 06:20:33.966642    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:33.969636    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:33.999639    8452 logs.go:282] 0 containers: []
	W1216 06:20:33.999639    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:34.002639    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:34.036799    8452 logs.go:282] 0 containers: []
	W1216 06:20:34.036799    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:34.041516    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:34.077744    8452 logs.go:282] 0 containers: []
	W1216 06:20:34.077744    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:34.080737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:34.112735    8452 logs.go:282] 0 containers: []
	W1216 06:20:34.112735    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:34.116732    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:34.149846    8452 logs.go:282] 0 containers: []
	W1216 06:20:34.149846    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:34.153242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:34.181219    8452 logs.go:282] 0 containers: []
	W1216 06:20:34.181219    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
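The sweep above queries Docker once per control-plane component, matching the k8s_<name> prefix that kubelet-managed containers carry; every probe returning "0 containers" confirms the control plane never started. A local sketch of the same sweep (minikube issues these docker commands over SSH inside the node):

	// list_k8s_containers.go - for each component, list the IDs of matching
	// containers, including exited ones (-a), as the log's probes do.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}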
	I1216 06:20:34.181219    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:34.181219    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:34.241348    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:34.241413    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:34.283163    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:34.283163    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:34.375792    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:34.367813    3405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:34.368872    3405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:34.369815    3405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:34.372013    3405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:34.373093    3405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... the same five "connection refused" discovery errors and closing message, reprinted verbatim ...]
	
	** /stderr **
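Note that describe nodes dies at API discovery with "connection refused", which pins the failure to nothing listening on the port; a timeout instead would point at a network or firewall path. A quick sketch of that distinction:

	// dial_check.go - "connection refused" vs. timeout: refused means the
	// host answered and no process owns the port; a timeout means the
	// packets never got an answer at all.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}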
	I1216 06:20:34.375792    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:34.375792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:34.402397    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:34.402397    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:34.467474    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:20:34.562705    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:34.562705    8452 retry.go:31] will retry after 47.613646072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:36.993599    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:37.019108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:37.050505    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.050505    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:37.054984    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:37.087723    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.087723    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:37.091140    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:37.123446    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.123483    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:37.127137    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:37.164230    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.164350    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:37.168290    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:37.198153    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.198153    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:37.201152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:37.234157    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.234214    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:37.238171    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:37.270100    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.270100    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:37.274805    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:37.307350    8452 logs.go:282] 0 containers: []
	W1216 06:20:37.307350    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:37.307350    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:37.307350    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:37.385272    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:37.385272    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:37.423273    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:37.423273    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:37.524010    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:37.516750    3613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:37.517932    3613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:37.519011    3613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:37.520069    3613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:37.521136    3613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... the same five "connection refused" discovery errors and closing message, reprinted verbatim ...]
	
	** /stderr **
	I1216 06:20:37.524010    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:37.524010    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:37.561734    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:37.561795    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:39.947156    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:20:40.038859    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:40.038859    8452 retry.go:31] will retry after 47.720606589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1216 06:20:40.129721    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:40.151695    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:40.182697    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.182697    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:40.185696    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:40.214698    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.214698    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:40.218698    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:40.256490    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.257483    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:40.260482    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:40.290492    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.290492    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:40.293485    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:40.323484    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.323484    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:40.326483    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:40.360481    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.360481    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:40.363496    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:40.396503    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.396503    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:40.400489    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:40.452123    8452 logs.go:282] 0 containers: []
	W1216 06:20:40.452123    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:40.452123    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:40.452123    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:40.488122    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:40.488122    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:40.588588    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:40.578485    3790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:40.579386    3790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:40.581725    3790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:40.582800    3790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:40.583871    3790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... the same five "connection refused" discovery errors and closing message, reprinted verbatim ...]
	
	** /stderr **
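The describe-nodes failure above is diagnostic in itself: `dial tcp [::1]:8443: connect: connection refused` means the kernel actively rejected the connection because nothing is listening on the apiserver port, as opposed to a timeout, which would suggest an unreachable or filtered host. The five memcache.go lines per invocation are kubectl's discovery retries before it prints the summary line. A small stdlib-only sketch that distinguishes the two failure modes (error matching as shown holds on Linux, where these logs were produced):

    // Probe the apiserver port, separating "connection refused" (port
    // closed, apiserver not running) from a timeout (unreachable or
    // filtered). Illustrative sketch only; Linux error semantics assumed.
    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            fmt.Println("listener up: something is serving on 8443")
        case errors.Is(err, syscall.ECONNREFUSED):
            fmt.Println("connection refused: port closed, apiserver not running")
        default:
            fmt.Println("other failure (timeout? filtered?):", err)
        }
    }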
	I1216 06:20:40.588588    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:40.588588    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:40.615676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:40.615676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:40.671487    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:40.671487    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
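The container-status command in this cycle uses a shell fallback: `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` prefers crictl when it is on PATH and falls back to the docker CLI otherwise, so the same collector works across container runtimes. The same shape in Go, as a sketch (real collection runs over SSH inside the minikube node; this is a local approximation):

    // runtimePS mimics the collector's fallback: try crictl if present,
    // otherwise fall back to the docker CLI. Sketch only.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runtimePS() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := runtimePS()
        if err != nil {
            fmt.Println("both runtimes failed:", err)
            return
        }
        fmt.Print(string(out))
    }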
	I1216 06:20:43.240651    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:43.260358    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:43.289579    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.289579    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:43.292589    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:43.321578    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.321578    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:43.325591    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:43.358593    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.358593    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:43.362592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:43.397585    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.397585    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:43.401599    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:43.440343    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.440343    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:43.445291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:43.478848    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.478848    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:43.481846    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:43.512844    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.512844    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:43.516843    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:43.548881    8452 logs.go:282] 0 containers: []
	W1216 06:20:43.548881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:43.548881    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:43.548881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:43.612847    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:43.612847    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:43.653297    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:43.653837    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:43.760206    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:43.752385    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.754137    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.755468    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.756582    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.757935    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:20:43.752385    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.754137    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.755468    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.756582    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:43.757935    3962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:20:43.760206    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:43.760206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:43.786205    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:43.786205    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:46.343725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:46.367234    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:46.399718    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.399764    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:46.404746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:46.434959    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.435001    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:46.438821    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:46.472043    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.472043    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:46.476317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:46.504774    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.504774    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:46.508470    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:46.539413    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.539413    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:46.543432    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:46.572671    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.572671    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:46.576187    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:46.609507    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.609507    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:46.613855    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:46.644733    8452 logs.go:282] 0 containers: []
	W1216 06:20:46.644733    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:46.644733    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:46.644733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:46.676458    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:46.676458    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:46.744210    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:46.744745    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:46.821693    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:46.821771    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:46.862209    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:46.862209    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:46.972296    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:46.955948    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.957839    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.960146    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.963137    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.964000    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:20:46.955948    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.957839    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.960146    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.963137    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:46.964000    4148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:20:49.477283    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:49.501414    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:49.547219    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.547219    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:49.551153    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:49.587988    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.587988    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:49.592987    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:49.626989    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.626989    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:49.629996    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:49.659481    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.659481    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:49.662478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:49.692011    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.692011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:49.696266    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:49.731614    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.731614    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:49.735408    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:49.765961    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.765961    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:49.770826    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:49.798620    8452 logs.go:282] 0 containers: []
	W1216 06:20:49.798620    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:49.798620    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:49.798620    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:49.833942    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:49.833942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:49.949967    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:49.936759    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.937696    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.939669    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.940684    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.942904    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:20:49.936759    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.937696    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.939669    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.940684    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:49.942904    4300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:20:49.949967    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:49.949967    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:49.983687    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:49.983687    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:50.032033    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:50.032033    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:52.629819    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:52.682343    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:52.716617    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.716617    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:52.720675    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:52.754813    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.754813    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:52.759283    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:52.804585    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.804664    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:52.808016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:52.847355    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.847355    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:52.851558    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:52.885528    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.885528    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:52.889532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:52.917156    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.917248    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:52.920823    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:52.952570    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.952570    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:52.956492    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:52.985422    8452 logs.go:282] 0 containers: []
	W1216 06:20:52.985422    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:52.985422    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:52.985422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:53.060290    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:53.060290    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:53.096241    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:53.096241    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:53.196840    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:53.185088    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.186190    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.187637    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.190543    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.191606    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:20:53.185088    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.186190    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.187637    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.190543    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:53.191606    4481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:20:53.196840    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:53.196840    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:53.221829    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:53.221829    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:55.773160    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:55.794168    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:55.823160    8452 logs.go:282] 0 containers: []
	W1216 06:20:55.823160    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:55.826148    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:55.859566    8452 logs.go:282] 0 containers: []
	W1216 06:20:55.859566    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:55.864203    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:55.900257    8452 logs.go:282] 0 containers: []
	W1216 06:20:55.900319    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:55.903676    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:55.936874    8452 logs.go:282] 0 containers: []
	W1216 06:20:55.936874    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:55.940878    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:55.970872    8452 logs.go:282] 0 containers: []
	W1216 06:20:55.970872    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:55.973877    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:56.006877    8452 logs.go:282] 0 containers: []
	W1216 06:20:56.006877    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:56.012878    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:56.045871    8452 logs.go:282] 0 containers: []
	W1216 06:20:56.046891    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:56.050902    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:56.084638    8452 logs.go:282] 0 containers: []
	W1216 06:20:56.084638    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:56.084638    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:56.084638    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:56.187627    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:56.174327    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.175949    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.179212    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.180434    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.181761    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:20:56.174327    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.175949    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.179212    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.180434    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:56.181761    4639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:20:56.187627    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:56.187627    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:56.213484    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:56.213484    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:56.260917    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:56.260917    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:56.321236    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:56.321236    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:58.862473    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:20:58.886667    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:20:58.915703    8452 logs.go:282] 0 containers: []
	W1216 06:20:58.915703    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:20:58.919713    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:20:58.949803    8452 logs.go:282] 0 containers: []
	W1216 06:20:58.949803    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:20:58.953312    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:20:58.984483    8452 logs.go:282] 0 containers: []
	W1216 06:20:58.984570    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:20:58.988576    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:20:59.020074    8452 logs.go:282] 0 containers: []
	W1216 06:20:59.020108    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:20:59.023342    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:20:59.057524    8452 logs.go:282] 0 containers: []
	W1216 06:20:59.057524    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:20:59.064779    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:20:59.104329    8452 logs.go:282] 0 containers: []
	W1216 06:20:59.104329    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:20:59.109029    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:20:59.147179    8452 logs.go:282] 0 containers: []
	W1216 06:20:59.147179    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:20:59.152509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:20:59.183248    8452 logs.go:282] 0 containers: []
	W1216 06:20:59.183248    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:20:59.183248    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:20:59.183248    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:20:59.212231    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:20:59.212231    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:20:59.257094    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:20:59.257094    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:20:59.316934    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:20:59.316934    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:20:59.354480    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:20:59.354480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:20:59.442899    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:20:59.429229    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.430593    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.432897    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.435044    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.437092    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:20:59.429229    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.430593    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.432897    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.435044    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:20:59.437092    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:01.946037    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:01.970652    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:02.006442    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.006442    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:02.010421    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:02.039188    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.039242    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:02.043243    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:02.076009    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.076009    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:02.080637    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:02.111587    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.111587    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:02.116389    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:02.152333    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.152333    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:02.158971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:02.193010    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.193112    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:02.196925    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:02.228095    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.228095    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:02.232778    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:02.262714    8452 logs.go:282] 0 containers: []
	W1216 06:21:02.262792    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:02.262792    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:02.262792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:02.327371    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:02.327371    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:02.364721    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:02.364721    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:02.458107    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:02.446201    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.447343    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.448511    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.449685    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.450891    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:02.446201    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.447343    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.448511    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.449685    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:02.450891    4987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:02.458107    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:02.458107    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:02.485656    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:02.485656    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:05.038618    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:05.062095    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:05.093541    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.093541    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:05.097534    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:05.126551    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.126551    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:05.129543    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:05.158410    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.158410    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:05.162597    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:05.190018    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.190018    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:05.194017    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:05.224025    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.224025    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:05.227017    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:05.256497    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.256497    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:05.259497    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:05.294310    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.294310    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:05.299934    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:05.342613    8452 logs.go:282] 0 containers: []
	W1216 06:21:05.342613    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:05.342613    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:05.342613    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:05.371620    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:05.371620    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:05.423619    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:05.423619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:05.482703    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:05.482703    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:05.522275    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:05.522275    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:05.630635    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:05.614929    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.617255    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.618025    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.620701    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.622059    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:05.614929    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.617255    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.618025    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.620701    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:05.622059    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
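The timestamps show the whole gather cycle repeating at roughly three-second intervals (06:20:40, :43, :46, :49, :52, :55, :58, 06:21:02, :05): a fixed-interval poll that keeps checking `pgrep -xnf kube-apiserver.*minikube.*` for a live apiserver until the surrounding operation times out. A stdlib sketch of that poll shape, with interval and timeout values chosen here for illustration:

    // pollFor runs check at a fixed interval until it succeeds or the
    // deadline passes -- the loop shape visible in the repeated
    // timestamps above. Interval/timeout values are illustrative.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func pollFor(interval, timeout time.Duration, check func() bool) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if check() {
                return true
            }
            time.Sleep(interval)
        }
        return false
    }

    func main() {
        ok := pollFor(3*time.Second, time.Minute, func() bool {
            // pgrep exits 0 only when a matching process exists.
            return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
        })
        fmt.Println("apiserver process found:", ok)
    }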
	I1216 06:21:08.103153    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 06:21:08.136170    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1216 06:21:08.204153    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:21:08.204153    8452 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
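The validation failures above are client-side: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443 inside the node, so every manifest in the dashboard bundle fails the same way. A minimal sketch for confirming the root cause from inside the node (assuming shell access, e.g. via `minikube ssh`; paths and version taken from the log above):

    # Probe the apiserver endpoint directly; "connection refused" confirms
    # the control plane is down rather than the manifests being invalid.
    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"

    # The workaround suggested by the error only skips client-side validation;
    # the apply itself would still fail against a dead apiserver.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml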
	I1216 06:21:08.208171    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:08.240827    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.240827    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:08.243822    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:08.280825    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.280825    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:08.283818    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:08.317206    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.317206    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:08.321207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:08.351220    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.351220    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:08.354216    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:08.385544    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.386542    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:08.389544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:08.418549    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.418549    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:08.421544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:08.450544    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.450544    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:08.454544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:08.491622    8452 logs.go:282] 0 containers: []
	W1216 06:21:08.491622    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:08.491622    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:08.491704    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:08.610962    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:08.610962    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:08.649975    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:08.649975    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:08.753565    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:08.742189    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:08.743439    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:08.744225    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:08.747027    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:08.748095    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:08.753565    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:08.753565    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:08.781572    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:08.781572    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
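The block above is minikube's control-plane health sweep: for each expected component it lists matching containers by name and warns when none exist, then gathers kubelet, dmesg, node, Docker, and container-status logs. A rough bash equivalent of the container sweep, assuming only the docker CLI on the node (component names taken from the log; nothing else is implied):

    # Check for each control-plane container the way the log does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      if [ -z "$ids" ]; then
        echo "no container found matching \"${c}\""
      fi
    done

With the apiserver container absent, every kubectl call in the cycles that follow fails identically.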
	I1216 06:21:11.341857    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:11.359868    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:11.390868    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.390868    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:11.394855    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:11.425868    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.425868    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:11.429856    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:11.464860    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.464860    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:11.469862    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:11.510130    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.510130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:11.516201    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:11.554322    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.554322    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:11.559324    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:11.598367    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.598367    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:11.602361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:11.632302    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.632302    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:11.637422    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:11.665058    8452 logs.go:282] 0 containers: []
	W1216 06:21:11.665058    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:11.665111    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:11.665111    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:11.710328    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:11.710328    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:11.810429    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:11.801807    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:11.803119    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:11.804152    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:11.805341    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:11.806329    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:11.810429    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:11.810429    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:11.841424    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:11.841424    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:11.908267    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:11.908267    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
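When every container check comes up empty, the kubelet journal gathered here is the most likely place to learn why the static pods never started. A hedged example of narrowing that down on the node (standard journalctl and grep only; the filter terms are illustrative):

    # Pull the same window minikube gathers, then filter for failures.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|refused'
    # The docker/cri-docker journal is collected the same way in these cycles.
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager | tail -n 50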
	I1216 06:21:14.484548    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:14.510520    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:14.540444    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.540444    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:14.544129    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:14.581633    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.581633    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:14.585969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:14.615682    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.615682    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:14.622160    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:14.661210    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.661210    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:14.668985    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:14.707355    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.707388    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:14.716778    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:14.760777    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.760777    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:14.766478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:14.800057    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.800057    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:14.805048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:14.839034    8452 logs.go:282] 0 containers: []
	W1216 06:21:14.839034    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:14.839034    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:14.839034    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:14.879041    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:14.879041    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:14.986455    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:14.975149    5682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:14.976085    5682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:14.978292    5682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:14.980526    5682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:14.981419    5682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:14.986455    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:14.987453    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:15.021633    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:15.021633    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:15.070551    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:15.070632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:17.654781    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:17.676791    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:17.707375    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.707375    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:17.712978    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:17.750898    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.750979    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:17.754876    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:17.785625    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.785625    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:17.788624    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:17.822834    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.822879    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:17.826757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:17.865604    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.865604    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:17.871406    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:17.905436    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.905436    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:17.914682    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:17.945264    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.945264    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:17.948258    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:17.986322    8452 logs.go:282] 0 containers: []
	W1216 06:21:17.986322    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:17.986322    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:17.986322    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:18.027287    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:18.027287    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:18.084619    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:18.084619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:18.163274    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:18.163274    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:18.199295    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:18.199295    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:18.291576    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:18.281427    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:18.282696    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:18.283794    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:18.285171    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:18.286148    5869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
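Each "describe nodes" attempt fails at the same first step: the client cannot fetch the API group list from https://localhost:8443. A sketch of the equivalent manual check, using the same kubeconfig and binary path as the log and only standard kubectl flags:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig cluster-info
    # Expected while the apiserver is down:
    #   The connection to the server localhost:8443 was refused ...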
	I1216 06:21:20.796930    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:20.818521    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:20.850500    8452 logs.go:282] 0 containers: []
	W1216 06:21:20.850500    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:20.854496    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:20.891503    8452 logs.go:282] 0 containers: []
	W1216 06:21:20.891503    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:20.895510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:20.938259    8452 logs.go:282] 0 containers: []
	W1216 06:21:20.938259    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:20.942274    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:20.968269    8452 logs.go:282] 0 containers: []
	W1216 06:21:20.968269    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:20.971263    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:21.002258    8452 logs.go:282] 0 containers: []
	W1216 06:21:21.002258    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:21.005259    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:21.035803    8452 logs.go:282] 0 containers: []
	W1216 06:21:21.035803    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:21.038794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:21.067809    8452 logs.go:282] 0 containers: []
	W1216 06:21:21.067809    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:21.070788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:21.096788    8452 logs.go:282] 0 containers: []
	W1216 06:21:21.096788    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:21.096788    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:21.096788    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:21.159855    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:21.159855    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:21.194858    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:21.194858    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:21.282870    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:21.272214    6024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:21.273851    6024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:21.274837    6024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:21.277273    6024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:21.278276    6024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:21.282870    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:21.282870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:21.307870    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:21.307870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:22.181668    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1216 06:21:22.269891    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:21:22.269891    8452 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
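The addons.go:477 line above notes "apply failed, will retry": the storage-provisioner manifest is re-applied on a backoff until the overall addon deadline, but no retry can succeed while the apiserver stays down. Once the control plane recovers, the addon can be re-enabled from the host; a minimal sketch (profile name illustrative):

    # Re-enable the addon after the apiserver is reachable again.
    minikube addons enable storage-provisioner -p <profile>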
	I1216 06:21:23.867065    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:23.886073    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:23.919244    8452 logs.go:282] 0 containers: []
	W1216 06:21:23.919244    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:23.923035    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:23.955233    8452 logs.go:282] 0 containers: []
	W1216 06:21:23.955233    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:23.958727    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:23.993199    8452 logs.go:282] 0 containers: []
	W1216 06:21:23.993199    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:23.998092    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:24.026083    8452 logs.go:282] 0 containers: []
	W1216 06:21:24.026083    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:24.029088    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:24.059085    8452 logs.go:282] 0 containers: []
	W1216 06:21:24.059085    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:24.062095    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:24.093088    8452 logs.go:282] 0 containers: []
	W1216 06:21:24.093088    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:24.097090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:24.130603    8452 logs.go:282] 0 containers: []
	W1216 06:21:24.130603    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:24.133586    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:24.162596    8452 logs.go:282] 0 containers: []
	W1216 06:21:24.162596    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:24.162596    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:24.162596    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:24.221589    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:24.221589    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:24.256592    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:24.256592    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:24.343938    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:24.333559    6206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:24.334848    6206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:24.335933    6206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:24.336922    6206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:24.339081    6206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:24.343938    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:24.343938    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:24.370061    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:24.370061    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:26.923615    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:26.945425    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:26.975418    8452 logs.go:282] 0 containers: []
	W1216 06:21:26.975418    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:26.979420    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:27.010304    8452 logs.go:282] 0 containers: []
	W1216 06:21:27.010304    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:27.014461    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:27.046540    8452 logs.go:282] 0 containers: []
	W1216 06:21:27.046540    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:27.050437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:27.081127    8452 logs.go:282] 0 containers: []
	W1216 06:21:27.081127    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:27.085131    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:27.113133    8452 logs.go:282] 0 containers: []
	W1216 06:21:27.113133    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:27.116128    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:27.147075    8452 logs.go:282] 0 containers: []
	W1216 06:21:27.147075    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:27.152067    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:27.181073    8452 logs.go:282] 0 containers: []
	W1216 06:21:27.181073    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:27.185075    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:27.221399    8452 logs.go:282] 0 containers: []
	W1216 06:21:27.221399    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:27.221399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:27.221399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:27.268969    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:27.268969    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:27.337621    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:27.337621    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:27.375387    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:27.375387    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:27.466293    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:27.456258    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:27.457663    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:27.458979    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:27.460248    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:27.461500    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:27.466293    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:27.466293    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:27.765257    8452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1216 06:21:27.854626    8452 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1216 06:21:27.854923    8452 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1216 06:21:27.862053    8452 out.go:179] * Enabled addons: 
	I1216 06:21:27.866797    8452 addons.go:530] duration metric: took 1m54.3567245s for enable addons: enabled=[]
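The summary pair above is the net result: after 1m54s of retries, "Enabled addons:" lists nothing and the duration metric records enabled=[], i.e. every addon callback (dashboard, storage-provisioner, default-storageclass) failed against the unreachable apiserver. Addon state can be verified from the host once the cluster is healthy; a minimal example using only the standard minikube CLI (profile name illustrative):

    # List addon status for the profile; the addons above should show disabled.
    minikube addons list -p <profile>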
	I1216 06:21:30.055522    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:30.078529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:30.109519    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.109519    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:30.112511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:30.144765    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.144765    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:30.149313    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:30.181852    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.181852    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:30.184838    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:30.217464    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.217504    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:30.221332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:30.249301    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.249301    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:30.252441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:30.278998    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.278998    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:30.281997    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:30.312414    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.312414    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:30.318242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:30.357304    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.357361    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:30.357422    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:30.357422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:30.746929    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:30.746929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:30.795665    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:30.795746    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:30.878691    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:30.878691    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:30.913679    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:30.913679    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:31.010602    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:33.516463    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:33.569431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:33.607164    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.607164    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:33.610177    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:33.645795    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.645795    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:33.649795    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:33.678786    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.678786    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:33.681789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:33.712090    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.712090    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:33.716091    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:33.749207    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.749207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:33.753072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:33.787281    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.787317    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:33.790849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:33.822451    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.822451    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:33.828957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:33.859827    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.859881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:33.859881    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:33.859929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:33.922584    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:33.922584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:34.010640    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:34.010640    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:34.010640    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:34.047937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:34.047937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:34.108035    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:34.108113    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
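The describe-nodes gather uses the kubectl binary cached inside the node under /var/lib/minikube/binaries/. The bundled kubectl can also be invoked from the host against the same cluster; a sketch (profile name again a placeholder):

    minikube -p "$PROFILE" kubectl -- describe nodes

Until the apiserver comes up, this fails with a similar "connection to the server ... was refused" error.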
	I1216 06:21:36.686452    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:36.704466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:36.733394    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.733394    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:36.737348    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:36.773510    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.773510    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:36.776509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:36.805498    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.805498    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:36.809501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:36.845499    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.845499    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:36.849511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:36.879646    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.879646    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:36.884108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:36.915408    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.915408    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:36.920279    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:36.952754    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.952754    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:36.955734    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:36.987884    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.987884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:36.987884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:36.987884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:37.053646    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:37.053646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:37.093881    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:37.093881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:37.179527    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:37.179584    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:37.179628    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:37.206261    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:37.206297    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
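Each retry cycle above repeats roughly every two to three seconds, i.e. a poll-until-healthy loop. The same wait can be expressed directly against the apiserver's health endpoint; a minimal sketch run inside the node (assumes the default unauthenticated /healthz and the port 8443 seen in the log):

    until curl -ksf https://localhost:8443/healthz >/dev/null; do
      echo "apiserver not ready; retrying..."
      sleep 2
    done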
	I1216 06:21:39.778747    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:39.802598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:39.832179    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.832179    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:39.836704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:39.869121    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.869121    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:39.873774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:39.909668    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.909668    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:39.914691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:39.947830    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.947830    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:39.951594    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:39.982557    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.982557    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:39.986642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:40.018169    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.018169    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:40.021165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:40.051243    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.051243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:40.057090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:40.084414    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.084414    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:40.084414    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:40.084414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:40.144414    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:40.144414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:40.179632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:40.179632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:40.269800    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:40.270323    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:40.270323    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:40.298399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:40.298399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:42.860131    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:42.883733    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:42.914204    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.914204    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:42.918228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:42.950380    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.950459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:42.954335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:42.985562    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.985562    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:42.988741    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:43.016773    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.016773    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:43.020957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:43.059042    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.059042    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:43.062546    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:43.091529    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.091529    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:43.095547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:43.122188    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.122188    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:43.126264    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:43.154603    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.154667    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:43.154687    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:43.154709    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:43.217437    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:43.217437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:43.256550    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:43.256550    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:43.334672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:43.334672    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:43.334672    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:43.363324    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:43.363324    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:45.929428    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:45.950755    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:45.982605    8452 logs.go:282] 0 containers: []
	W1216 06:21:45.982605    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:45.987711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:46.020649    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.020649    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:46.024931    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:46.058836    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.058836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:46.066651    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:46.094860    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.094860    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:46.098689    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:46.127246    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.127246    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:46.130937    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:46.159519    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.159519    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:46.163609    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:46.195483    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.195483    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:46.199178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:46.229349    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.229349    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:46.229349    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:46.229349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:46.292392    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:46.292392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:46.330903    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:46.330903    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:46.427283    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:46.427283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:46.427283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:46.458647    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:46.458647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:49.005293    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.027308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:49.061499    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.061499    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:49.064510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:49.094110    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.094110    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:49.097105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:49.126749    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.126749    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:49.132748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:49.169344    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.169344    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:49.174332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:49.209103    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.209103    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:49.214581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:49.248644    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.248644    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:49.251643    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:49.281632    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.281632    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:49.285645    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:49.316955    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.316955    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:49.316955    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:49.317942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:49.391656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:49.391738    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:49.432724    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:49.432724    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:49.523989    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:49.523989    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:49.523989    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:49.552004    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:49.552004    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:52.109148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:52.140748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:52.186855    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.186855    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:52.193111    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:52.227511    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.227511    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:52.232508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:52.265331    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.265331    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:52.270635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:52.301130    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.301130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:52.307669    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:52.342623    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.342623    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:52.347794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:52.387246    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.387246    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:52.392629    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:52.445055    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.445143    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:52.449252    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:52.480071    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.480071    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:52.480071    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:52.480071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:52.547115    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:52.547115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:52.590630    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:52.590630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:52.690412    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:52.690412    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:52.690412    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:52.718573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:52.718573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:55.273773    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:55.297441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:55.334351    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.334404    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:55.338338    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:55.372344    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.372344    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:55.375335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:55.429711    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.429711    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:55.432707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:55.463415    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.463415    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:55.466882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:55.495871    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.495871    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:55.499782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:55.530135    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.530135    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:55.534032    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:55.561956    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.561956    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:55.567456    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:55.598684    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.598684    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:55.598684    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:55.598684    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:55.661553    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:55.661553    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:55.699330    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:55.699330    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:55.806271    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:55.806271    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:55.806271    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:55.839937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:55.839937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.400590    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:58.422580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:58.453081    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.453081    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:58.457176    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:58.482739    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.482739    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:58.486288    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:58.516198    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.516198    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:58.520374    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:58.550134    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.550134    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:58.553679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:58.585815    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.585815    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:58.589532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:58.620180    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.620180    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:58.626021    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:58.656946    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.656946    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:58.659942    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:58.687618    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.687618    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:58.687618    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:58.687618    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:58.777493    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:58.777493    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:58.777493    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:58.805676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:58.805676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.860391    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:58.860391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:58.925444    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:58.925444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
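The eight per-component docker checks repeated in every cycle differ only in the name filter. For reference, the same sweep written as a single loop (this loop is an illustration, not minikube's implementation):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && echo "no container matching ${c}"
    done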
	I1216 06:22:01.467725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:01.493737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:01.526377    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.526426    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:01.530527    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:01.563582    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.563582    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:01.567007    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:01.606460    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.606460    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:01.610155    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:01.647965    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.647965    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:01.652309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:01.691011    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.691011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:01.695466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:01.736509    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.736509    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:01.739991    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:01.773642    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.773642    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:01.777617    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:01.811025    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.811141    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:01.811141    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:01.811141    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:01.881533    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:01.881533    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.919632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:01.919632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:02.020491    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:02.020538    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:02.020587    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:02.059933    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:02.060031    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
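The container-status command embeds a fallback chain: use crictl when it resolves on PATH, otherwise fall back to plain docker ps -a. The same idea as a small reusable function (a sketch, not minikube's own code):

  # List all containers with whichever runtime CLI is available.
  container_status() {
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi
  }
  container_status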
	I1216 06:22:04.620893    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:04.645761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:04.680596    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.680596    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:04.684611    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:04.712607    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.712607    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:04.716737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:04.744218    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.744218    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:04.748501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:04.787600    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.787668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:04.791349    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:04.825500    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.825547    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:04.829098    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:04.878465    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.878465    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:04.881466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:04.910168    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.910168    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:04.914167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:04.949752    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.949810    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:04.949810    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:04.949873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:04.987656    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:04.987703    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:05.093013    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:05.093013    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:05.093013    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:05.148503    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:05.148503    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:05.222357    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:05.222357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
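These cycles repeat every 2-3 seconds: look for a kube-apiserver process, then for each expected control-plane container. The equivalent wait loop with an explicit deadline looks like the sketch below; the 120 s budget is an illustrative assumption, the run itself retried for far longer:

  # Poll for a running kube-apiserver, as the cycles above do, with a deadline.
  deadline=$((SECONDS + 120))
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    if (( SECONDS >= deadline )); then
      echo "kube-apiserver never appeared" >&2
      exit 1
    fi
    sleep 3
  done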
	I1216 06:22:07.791130    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:07.816699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:07.846890    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.846890    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:07.850551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:07.885179    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.885179    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:07.889622    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:07.920925    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.920925    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:07.925517    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:07.955043    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.955043    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:07.959825    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:07.988928    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.988928    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:07.993735    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:08.025335    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.025335    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:08.031801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:08.063231    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.063231    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:08.068525    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:08.106217    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.106217    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:08.106217    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:08.106217    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:08.173411    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:08.173411    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:08.241764    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:08.241764    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:08.282741    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:08.282741    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:08.376141    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:08.376181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:08.376246    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:10.906574    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:10.929977    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:10.963006    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.963006    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:10.966334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:10.995517    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.995517    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:10.998887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:11.027737    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.027771    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:11.034529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:11.070221    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.070221    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:11.075447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:11.105575    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.105575    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:11.108569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:11.143549    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.143549    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:11.146562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:11.178034    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.178034    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:11.181411    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:11.211522    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.211522    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:11.211522    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:11.211522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:11.244289    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:11.244289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:11.295870    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:11.295870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:11.359418    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:11.360418    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:11.394416    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:11.394416    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:11.489247    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
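Each cycle checks eight k8s_* containers with one docker ps call apiece, and every check returns zero containers even with -a, so the control-plane containers were never created at all (a crash would still leave exited containers behind). The same sweep as a loop:

  # Report which k8s_* containers exist on the node, one line per component.
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
    echo "${c}: ${ids:-<none>}"
  done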
	I1216 06:22:13.994214    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:14.016691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:14.049641    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.049641    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:14.053607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:14.088893    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.088893    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:14.092847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:14.131857    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.131857    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:14.135845    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:14.168503    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.168503    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:14.172477    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:14.200948    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.200948    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:14.204642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:14.234975    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.234975    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:14.238802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:14.274052    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.274107    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:14.277642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:14.306199    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.306199    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:14.306199    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:14.306199    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:14.374972    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:14.374972    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:14.411356    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:14.411356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:14.498252    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:14.498283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:14.498283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:14.528112    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:14.528112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
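kubectl here runs against the node-local kubeconfig at /var/lib/minikube/kubeconfig, whose cluster entry points at https://localhost:8443, which is why every attempt dials [::1]:8443. To see which endpoint a kubeconfig actually targets, standard kubectl suffices:

  # Print the API server URL the node-local kubeconfig points at.
  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    config view --minify -o jsonpath='{.clusters[0].cluster.server}'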
	I1216 06:22:17.081041    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:17.103056    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:17.137059    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.137059    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:17.141064    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:17.172640    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.172640    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:17.176638    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:17.210910    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.210910    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:17.215347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:17.248986    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.248986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:17.252989    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:17.287415    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.287415    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:17.293572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:17.324098    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.324098    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:17.330062    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:17.366512    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.366512    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:17.370101    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:17.402400    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.402400    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:17.402400    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:17.402400    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:17.455027    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:17.455027    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:17.513029    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:17.513029    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:17.548022    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:17.548022    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:17.645629    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:17.645629    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:17.645629    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:20.178315    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:20.202308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:20.231344    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.231344    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:20.236317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:20.279459    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.279459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:20.283465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:20.322463    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.322463    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:20.327465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:20.366466    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.366466    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:20.371478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:20.409468    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.409468    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:20.413471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:20.447432    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.447432    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:20.451099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:20.486103    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.486103    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:20.490094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:20.530098    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.530098    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:20.530098    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:20.530098    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:20.557089    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:20.557089    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:20.606234    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:20.607239    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:20.667498    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:20.667498    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:20.703674    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:20.703674    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:20.796605    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
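With no containers to inspect, the kubelet journal is the only collector likely to explain the failure; image pulls, CNI setup, and certificate problems all surface there first. A quick triage over the same 400-line window:

  # Surface only error-ish kubelet lines from the window this log gathers.
  sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|refused' \
    || echo "no matching kubelet errors in the last 400 lines"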
	I1216 06:22:23.300916    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:23.324266    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:23.355598    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.355598    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:23.359141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:23.390554    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.390644    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:23.394340    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:23.423019    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.423019    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:23.426772    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:23.456953    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.457021    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:23.460762    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:23.491477    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.491477    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:23.495183    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:23.527107    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.527107    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:23.531577    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:23.559306    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.559306    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:23.563381    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:23.592615    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.592615    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:23.592615    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:23.592615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:23.630103    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:23.630103    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:23.719384    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:23.719514    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:23.719546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:23.746097    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:23.746097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:23.807727    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:23.807727    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:26.382913    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:26.404112    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:26.436722    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.436722    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:26.440749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:26.470877    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.470877    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:26.474941    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:26.503887    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.503950    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:26.508216    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:26.538317    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.538317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:26.542754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:26.571126    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.571189    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:26.574883    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:26.604762    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.604762    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:26.608705    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:26.637404    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.637444    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:26.641214    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:26.669720    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.669720    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:26.669720    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:26.669720    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:26.707289    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:26.707289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:26.791357    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:26.791357    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:26.791357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:26.817227    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:26.817227    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.865832    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:26.865832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.436231    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:29.459817    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:29.493134    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.493186    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:29.497118    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:29.526722    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.526722    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:29.531481    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:29.561672    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.561718    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:29.566882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:29.595896    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.595947    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:29.599655    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:29.628575    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.628661    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:29.632644    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:29.660164    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.660164    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:29.663829    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:29.694413    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.694413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:29.698152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:29.725286    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.725286    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:29.725355    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:29.725355    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.787721    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:29.787721    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:29.828376    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:29.828376    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:29.916249    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
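Note: every "describe nodes" gather in this section fails the same way: the bundled kubectl at /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl dials the apiserver endpoint from /var/lib/minikube/kubeconfig (localhost:8443) and is refused, which is consistent with the empty k8s_kube-apiserver probes above. Manual reachability checks, assuming shell access inside the node (the first two commands are verbatim from the log; the /healthz probe is an assumed, anonymously readable endpoint):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                      # process probe
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'  # container probe
    curl -ksf https://localhost:8443/healthz || echo "apiserver unreachable"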
	I1216 06:22:29.916249    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:29.916249    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:29.942276    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:29.942276    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
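Note: the container-status gather tries crictl first and falls back to docker. The backquoted `which crictl || echo crictl` expands to crictl's path when it is installed, otherwise to the bare word crictl, whose failure then triggers the `|| sudo docker ps -a` branch:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a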
	I1216 06:22:32.497361    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
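Note: the apiserver process probe, annotated (pgrep flags per procps):

    #   -x  require the pattern to match the whole command line
    #   -n  report only the newest matching process
    #   -f  match against the full command line, not just the process name
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'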
	I1216 06:22:32.517362    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:32.549841    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.549912    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:32.553592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:32.582070    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.582070    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:32.585068    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:32.612095    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.612095    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:32.615889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:32.644953    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.644953    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:32.649025    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:32.676348    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.676429    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:32.680134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:32.708040    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.708040    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:32.712034    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:32.745789    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.745789    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:32.752533    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:32.781449    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.781504    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:32.781504    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:32.781504    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:32.843135    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:32.843135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:32.881564    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:32.881564    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:32.982597    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:32.982597    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:32.982597    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:33.013212    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:33.013212    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
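Note: the probe cycles above repeat on a roughly three-second cadence (06:22:29, :32, :35, ...), consistent with a wait loop polling for the apiserver to come up. A hypothetical sketch of such a loop; the 3 s interval is inferred from the timestamps and the 10-minute timeout is an assumption, not minikube's actual value:

    deadline=$(( $(date +%s) + 600 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; exit 1; }
      sleep 3
    done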
	I1216 06:22:35.578218    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:35.601163    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:35.629786    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.629786    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:35.634440    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:35.663168    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.663168    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:35.667699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:35.699050    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.699050    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:35.703224    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:35.736149    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.736149    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:35.741542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:35.772450    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.772450    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:35.776692    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:35.804150    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.804150    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:35.808799    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:35.837871    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.837871    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:35.841100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:35.870769    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.870769    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:35.870769    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:35.870769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:35.934803    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:35.934803    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:35.973201    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:35.973201    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:36.070057    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:36.070057    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:36.070057    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:36.098690    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:36.098690    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:38.663786    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:38.688639    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:38.718646    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.718646    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:38.721640    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:38.751651    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.751651    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:38.754647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:38.784327    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.784327    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:38.788327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:38.815337    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.815337    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:38.818328    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:38.846331    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.846331    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:38.849339    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:38.880297    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.880297    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:38.884227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:38.917702    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.917702    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:38.920940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:38.964973    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.964973    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:38.964973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:38.964973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:38.999971    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:38.999971    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:39.102927    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:39.102927    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:39.102927    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:39.141934    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:39.141934    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:39.210081    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:39.210081    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:41.775031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:41.798710    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:41.831778    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.831778    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:41.835461    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:41.866411    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.866411    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:41.871544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:41.902486    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.902486    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:41.905907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:41.932887    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.932887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:41.935886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:41.965890    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.965890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:41.968887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:42.000893    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.000893    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:42.004876    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:42.043522    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.043591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:42.049149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:42.081678    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.081678    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:42.081678    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:42.081678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:42.140208    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:42.140208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:42.198197    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:42.198197    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:42.241586    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:42.241586    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:42.350617    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:42.350617    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:42.350617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:44.884303    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:44.902304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:44.933421    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.933421    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:44.938149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:44.974292    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.974334    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:44.977512    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:45.010620    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.010620    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:45.013618    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:45.047628    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.047628    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:45.050627    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:45.089756    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.089850    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:45.096356    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:45.137323    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.137323    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:45.141322    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:45.169330    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.170335    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:45.173321    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:45.202336    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.202336    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:45.202336    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:45.202336    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:45.227331    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:45.227331    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:45.275577    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:45.275630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:45.335206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:45.335206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:45.372222    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:45.372222    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:45.471935    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:47.976320    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:48.004505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:48.037430    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.037430    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:48.040437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:48.076428    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.076477    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:48.081194    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:48.118536    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.118536    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:48.124810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:48.153702    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.153702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:48.159558    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:48.187736    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.187736    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:48.192607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:48.225619    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.225619    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:48.229580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:48.260085    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.260085    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:48.263087    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:48.294313    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.294376    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:48.294376    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:48.294425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:48.345094    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:48.345094    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:48.423576    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:48.423576    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:48.459577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:48.459577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:48.548441    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:48.548441    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:48.548441    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:51.080561    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:51.104134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:51.132144    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.132144    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:51.136151    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:51.163962    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.163962    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:51.169361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:51.198404    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.198404    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:51.201253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:51.229899    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.229899    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:51.232895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:51.261881    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.261881    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:51.264887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:51.295306    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.295306    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:51.298763    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:51.331779    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.331850    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:51.337211    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:51.367502    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.367502    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:51.367502    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:51.367502    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:51.424226    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:51.424226    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:51.482475    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:51.482475    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:51.527426    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:51.527426    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:51.618444    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:51.618444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:51.618444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.148108    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:54.167190    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:54.198456    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.198456    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:54.202605    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:54.236901    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.236901    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:54.240906    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:54.272541    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.272541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:54.277008    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:54.312764    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.312764    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:54.317359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:54.347564    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.347564    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:54.350557    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:54.377557    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.377557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:54.381564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:54.411585    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.411585    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:54.415565    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:54.447567    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.447567    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:54.447567    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:54.447567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:54.483559    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:54.483559    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:54.589583    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:54.589583    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:54.589583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.617283    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:54.617349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:54.673906    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:54.673990    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:57.250472    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:57.271468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:57.303800    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.303800    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:57.306801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:57.338803    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.338803    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:57.341800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:57.369018    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.369018    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:57.372806    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:57.403510    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.403510    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:57.406808    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:57.440995    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.440995    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:57.444225    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:57.475612    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.475612    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:57.479607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:57.509842    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.509842    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:57.513186    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:57.545981    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.545981    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:57.545981    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:57.545981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:57.636635    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:57.636635    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:57.636635    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:57.662639    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:57.662639    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:57.720464    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:57.720464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:57.782460    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:57.782460    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.324364    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:00.344368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:00.375358    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.375358    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:00.378355    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:00.410368    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.410368    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:00.414359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:00.442364    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.442364    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:00.446359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:00.476371    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.476371    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:00.479359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:00.508323    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.508323    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:00.512431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:00.550611    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.550611    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:00.553606    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:00.586336    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.586336    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:00.590552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:00.624129    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.624129    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:00.624129    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:00.624129    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:00.685547    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:00.685547    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.737417    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:00.737417    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:00.858025    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:00.858025    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:00.858025    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:00.886607    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:00.886607    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:03.463847    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:03.826614    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:03.881622    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.881622    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:03.887610    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:03.936557    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.937539    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:03.941562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:03.979542    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.979542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:03.983550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:04.020535    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.020535    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:04.025547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:04.064541    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.064541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:04.068548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:04.101538    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.101538    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:04.104544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:04.141752    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.141752    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:04.146757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:04.182755    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.182755    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
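The block above is one pass of minikube's control-plane scan: for each expected component it filters docker ps by the k8s_<component> container-name prefix and logs a warning when the match list comes back empty. A condensed sketch of the same pass, with the component list taken from the warnings above:

    # One scan pass: empty output for a component is what produces
    # 'No container was found matching "<component>"' in the log.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter=name=k8s_$c --format={{.ID}}
    done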
	I1216 06:23:04.182755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:04.182755    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:04.305758    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:04.305758    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:04.356425    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:04.356425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:04.487429    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:04.487429    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:04.487429    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:04.526318    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:04.526362    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.087022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:07.110346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:07.137790    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.137790    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:07.141786    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:07.174601    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.174601    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:07.179419    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:07.211656    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.211656    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:07.216897    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:07.250459    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.250459    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:07.254048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:07.282207    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.282207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:07.285851    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:07.313925    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.313925    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:07.317529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:07.348851    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.348851    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:07.353083    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:07.381401    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.381401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:07.381401    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:07.381401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:07.408641    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:07.408641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.450935    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:07.450935    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:07.512733    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:07.512733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:07.552522    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:07.552522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:07.649624    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.155054    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:10.178201    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:10.207068    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.207068    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:10.210473    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:10.239652    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.239652    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:10.242766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:10.274887    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.274887    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:10.278519    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:10.308294    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.308351    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:10.312209    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:10.342572    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.342572    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:10.346437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:10.375569    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.375630    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:10.378861    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:10.405446    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.405446    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:10.410730    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:10.441244    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.441244    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:10.441244    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:10.441244    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:10.502753    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:10.502753    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:10.540437    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:10.540437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:10.626853    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.626853    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:10.626853    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:10.654987    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:10.655058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.213336    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:13.237358    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:13.266636    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.266721    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:13.270023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:13.297369    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.297434    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:13.300782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:13.336039    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.336039    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:13.341919    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:13.370523    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.370523    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:13.374455    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:13.404606    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.404606    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:13.408542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:13.437373    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.437431    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:13.441106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:13.470738    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.470738    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:13.474495    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:13.502203    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.502262    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:13.502262    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:13.502293    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.552578    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:13.552578    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:13.617499    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:13.617499    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:13.660047    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:13.660047    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:13.747316    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:13.747316    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:13.747316    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.284216    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:16.307907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:16.344535    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.344535    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:16.347847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:16.379001    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.379021    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:16.382292    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:16.413093    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.413116    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:16.418012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:16.456763    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.456826    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:16.460621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:16.491671    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.491693    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:16.495352    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:16.527862    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.527862    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:16.534704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:16.564194    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.564243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:16.570369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:16.601444    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.601444    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:16.601444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:16.601444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.631785    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:16.631785    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:16.675190    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:16.675190    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:16.737700    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:16.737700    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:16.775092    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:16.775092    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:16.865026    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:19.370669    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:19.393524    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:19.423405    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.423513    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:19.427307    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:19.459137    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.459238    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:19.462635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:19.493542    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.493542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:19.497334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:19.526496    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.526496    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:19.529949    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:19.559120    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.559120    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:19.562460    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:19.591305    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.591305    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:19.595794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:19.625200    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.626193    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:19.629187    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:19.657201    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.657201    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:19.657270    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:19.657270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:19.722496    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:19.722496    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:19.761161    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:19.761161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:19.852755    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
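The timestamps show this whole sequence repeating on a roughly three-second cadence: each cycle starts with a pgrep for a running kube-apiserver process and, finding none, re-runs the container scan and log gathering. A hypothetical reconstruction of that polling loop, as read from the log rather than from minikube source:

    # Sketch of the retry loop implied by the ~3s spacing of the cycles;
    # the pattern is quoted here, unlike in the raw log line, so the
    # shell does not expand the wildcards.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
      # ...container scan and log gathering as shown above...
    done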
	I1216 06:23:19.853756    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:19.853756    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:19.880330    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:19.881280    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.458668    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:22.483505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:22.514647    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.514647    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:22.518193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:22.551494    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.551494    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:22.555268    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:22.586119    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.586119    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:22.590107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:22.621733    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.621733    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:22.624739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:22.651728    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.651728    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:22.655725    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:22.687826    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.687826    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:22.692217    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:22.727413    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.727413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:22.731318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:22.769477    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.769477    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:22.770462    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:22.770462    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:22.795455    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:22.795455    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.851473    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:22.851473    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:22.911454    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:22.912459    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:22.948112    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:22.948112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:23.039238    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:25.544174    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:25.571784    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:25.610368    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.610422    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:25.615377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:25.651080    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.651129    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:25.655234    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:25.695942    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.695942    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:25.700548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:25.727743    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.727743    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:25.730739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:25.765620    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.765650    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:25.769261    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:25.805072    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.805127    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:25.810318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:25.840307    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.840307    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:25.844490    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:25.888279    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.888279    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:25.888279    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:25.888279    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:25.964206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:25.964206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:26.003275    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:26.003275    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:26.111485    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:26.111485    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:26.111485    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:26.146819    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:26.146819    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:28.694382    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:28.716947    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:28.753062    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.753062    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:28.756810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:28.789692    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.789692    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:28.794681    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:28.823690    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.823690    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:28.827683    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:28.858686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.858686    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:28.861688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:28.891686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.891686    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:28.894684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:28.923683    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.923683    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:28.926684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:28.958314    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.958314    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:28.962325    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:28.991317    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.991317    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:28.991317    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:28.991317    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:29.039348    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:29.039348    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:29.103117    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:29.103117    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:29.148003    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:29.148003    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:29.240448    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:29.240448    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:29.240448    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:31.772923    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:31.796203    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:31.827485    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.827485    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:31.830572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:31.873718    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.873718    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:31.877445    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:31.926391    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.926391    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:31.929391    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:31.964572    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.964572    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:31.968096    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:32.003776    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.003776    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:32.007175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:32.046322    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.046322    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:32.049283    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:32.077299    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.077299    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:32.080289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:32.114717    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.114793    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:32.114793    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:32.114843    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:32.191987    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:32.191987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:32.237143    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:32.237143    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:32.331899    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:32.331899    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:32.331899    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:32.362021    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:32.362021    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:34.918825    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:34.945647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:34.976745    8452 logs.go:282] 0 containers: []
	W1216 06:23:34.976745    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:34.980636    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:35.012295    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.012295    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:35.015295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:35.047289    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.047289    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:35.050289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:35.081492    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.081492    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:35.085580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:35.121645    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.121645    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:35.126840    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:35.167976    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.167976    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:35.170966    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:35.201969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.201969    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:35.204969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:35.232969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.233980    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:35.233980    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:35.233980    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:35.292973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:35.292973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:35.327973    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:35.327973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:35.420114    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:35.420114    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:35.420114    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:35.451148    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:35.451148    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:38.010056    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:38.035506    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:38.071853    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.071853    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:38.075564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:38.106543    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.106543    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:38.109547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:38.143669    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.143669    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:38.152737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:38.191923    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.191923    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:38.195575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:38.225935    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.225935    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:38.228939    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:38.268550    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.268550    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:38.271759    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:38.304387    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.304421    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:38.307849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:38.341968    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.341968    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:38.341968    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:38.341968    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:38.404267    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:38.404267    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:38.443104    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:38.443104    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:38.551474    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:38.551474    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:38.551474    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:38.582843    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:38.582869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.141896    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:41.185331    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:41.218961    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.219548    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:41.223789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:41.252376    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.252376    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:41.255368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:41.285378    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.285378    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:41.288369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:41.318383    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.318383    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:41.321372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:41.349373    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.349373    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:41.353377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:41.390105    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.390105    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:41.393103    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:41.425109    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.425109    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:41.428107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:41.462594    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.462594    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:41.462594    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:41.462594    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:41.492096    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:41.492156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.553755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:41.553806    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:41.622329    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:41.622329    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:41.664016    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:41.664016    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:41.759009    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:44.265223    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:44.286309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:44.319583    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.319583    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:44.324575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:44.358046    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.358114    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:44.361895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:44.390541    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.390541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:44.395354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:44.433163    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.433163    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:44.436754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:44.470605    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.470605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:44.475856    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:44.504412    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.504484    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:44.508013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:44.540170    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.540170    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:44.545802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:44.574593    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.575118    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:44.575181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:44.575181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:44.609181    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:44.609231    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:44.663988    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:44.663988    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:44.737678    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:44.737678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:44.777530    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:44.777530    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:44.868751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:47.373432    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:47.674375    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:47.705067    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.705067    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:47.709193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:47.739921    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.739921    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:47.743656    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:47.771970    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.771970    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:47.776451    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:47.808633    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.808633    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:47.813124    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:47.856079    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.856079    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:47.859452    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:47.891897    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.891897    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:47.895769    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:47.926050    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.926050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:47.929679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:47.962571    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.962571    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:47.962571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:47.962571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:48.026367    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:48.026367    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:48.063580    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:48.063580    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:48.173751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:48.173792    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:48.173792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:48.199403    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:48.199403    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:50.750699    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:50.774573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:50.804983    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.804983    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:50.808894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:50.838533    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.838533    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:50.842242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:50.873377    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.873377    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:50.877508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:50.907646    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.907646    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:50.912410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:50.943853    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.943853    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:50.950275    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:50.977570    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.977570    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:50.982575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:51.010211    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.010211    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:51.014545    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:51.048584    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.048584    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:51.048584    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:51.048584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:51.112725    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:51.112725    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:51.150854    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:51.150854    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:51.246494    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:51.246535    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:51.246535    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:51.274873    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:51.274873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:53.832981    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:53.857995    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:53.892159    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.892159    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:53.895775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:53.926160    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.926160    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:53.929408    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:53.956482    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.956552    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:53.959711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:53.989788    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.989788    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:53.993230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:54.022506    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.022506    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:54.025409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:54.054974    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.054974    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:54.059372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:54.088015    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.088015    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:54.092123    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:54.121961    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.121961    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:54.121961    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:54.121961    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:54.169232    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:54.169295    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:54.230158    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:54.231156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:54.267713    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:54.267713    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:54.368006    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:54.368006    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:54.368006    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:56.899723    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:56.923149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:56.957635    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.957635    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:56.961499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:56.988363    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.988363    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:56.992371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:57.021993    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.021993    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:57.025544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:57.055718    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.055718    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:57.060969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:57.092456    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.092523    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:57.096418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:57.125588    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.125588    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:57.129665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:57.160663    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.160663    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:57.164518    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:57.196231    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.196281    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:57.196281    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:57.196281    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:57.258973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:57.258973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:57.302939    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:57.302939    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:57.397577    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:57.397577    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:57.397577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:57.434801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:57.434801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:59.991022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:00.014170    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:00.046529    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.046529    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:00.050903    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:00.080796    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.080796    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:00.084418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:00.114858    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.114858    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:00.121404    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:00.152596    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.152596    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:00.156447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:00.183532    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.183648    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:00.187074    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:00.218971    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.218971    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:00.222929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:00.252086    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.252086    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:00.256309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:00.285884    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.285884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:00.285884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:00.285884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:00.364208    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:00.364208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:00.403464    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:00.403464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:00.495864    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:00.495864    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:00.495864    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:00.521592    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:00.521592    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:03.070724    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:03.093858    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:03.127112    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.127112    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:03.131265    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:03.161262    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.161262    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:03.165073    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:03.195882    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.195933    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:03.200488    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:03.230205    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.230205    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:03.234193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:03.263580    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.263629    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:03.267410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:03.297599    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.297652    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:03.300957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:03.329666    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.329720    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:03.333378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:03.365184    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.365236    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:03.365282    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:03.365282    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:03.428385    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:03.428385    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:03.465984    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:03.465984    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:03.557873    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:03.559101    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:03.559101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:03.586791    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:03.586791    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:06.142562    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:06.170227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:06.202672    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.202672    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:06.206691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:06.237624    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.237624    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:06.241559    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:06.267616    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.267616    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:06.271709    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:06.304567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.304567    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:06.308556    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:06.337567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.337567    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:06.344744    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:06.373520    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.373520    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:06.377184    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:06.411936    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.411936    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:06.415789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:06.447263    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.447263    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:06.447263    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:06.447263    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:06.509097    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:06.509097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:06.546188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:06.546188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:06.639923    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:06.639923    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:06.639923    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:06.666485    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:06.666519    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.221249    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:09.244788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:09.276490    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.276490    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:09.280706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:09.309520    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.309520    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:09.313105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:09.339092    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.339092    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:09.343484    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:09.369046    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.369046    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:09.373188    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:09.403810    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.403810    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:09.407108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:09.437156    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.437156    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:09.441754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:09.469752    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.469810    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:09.473378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:09.503754    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.503754    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:09.503754    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:09.503754    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:09.533645    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:09.533718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.587529    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:09.587529    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:09.647801    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:09.647801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:09.686577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:09.686577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:09.782674    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
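The surrounding cycle repeats roughly every three seconds: a pgrep for a kube-apiserver process, then one docker ps -a --filter=name=k8s_<component> query per control-plane component, then the log gathering above. A rough, hypothetical Go rendering of that polling shape (the component names and the docker invocation are taken from the log; everything else is illustrative, not minikube's actual implementation):

    // poll.go (illustrative): ask Docker once per cycle whether each
    // expected control-plane container exists yet.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func containerIDs(name string) []string {
        // Mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for attempt := 0; attempt < 20; attempt++ {
            for _, c := range components {
                if len(containerIDs(c)) == 0 {
                    fmt.Printf("no container was found matching %q\n", c)
                }
            }
            time.Sleep(3 * time.Second) // the log shows ~3s between cycles
        }
    }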
	I1216 06:24:12.288199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:12.313967    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:12.344043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.344043    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:12.348347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:12.378683    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.378683    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:12.382106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:12.411599    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.411599    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:12.415131    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:12.445826    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.445873    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:12.450940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:12.481043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.481078    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:12.484800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:12.512969    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.512990    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:12.515915    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:12.548151    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.548228    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:12.551706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:12.584039    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.584039    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:12.584039    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:12.584039    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:12.646680    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:12.646680    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:12.686545    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:12.686545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:12.804767    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:12.804767    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:12.804767    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:12.831866    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:12.831866    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:15.392415    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:15.416435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:15.445044    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.445044    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:15.449260    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:15.476688    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.476688    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:15.481012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:15.508866    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.508928    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:15.512662    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:15.541002    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.541002    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:15.545363    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:15.574947    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.574991    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:15.578407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:15.604751    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.604751    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:15.608699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:15.639261    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.639338    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:15.642317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:15.674404    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.674404    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:15.674404    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:15.674404    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:15.736218    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:15.736218    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:15.774188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:15.774188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:15.862546    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:15.862546    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:15.862546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:15.888115    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:15.888115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.441031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:18.465207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:18.495447    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.495481    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:18.498929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:18.528412    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.528476    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:18.531543    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:18.560175    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.560175    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:18.563996    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:18.592824    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.592894    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:18.596175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:18.623746    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.623746    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:18.627099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:18.652978    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.653013    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:18.656407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:18.683637    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.683686    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:18.687125    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:18.716903    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.716942    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:18.716964    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:18.716981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:18.743123    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:18.743675    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.794891    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:18.794891    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:18.858345    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:18.858345    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:18.894242    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:18.894242    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:18.979844    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:21.485585    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:21.510290    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:21.539823    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.539823    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:21.543159    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:21.575241    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.575241    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:21.579330    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:21.607389    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.607490    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:21.611023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:21.642332    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.642332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:21.645973    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:21.671339    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.671390    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:21.675048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:21.704483    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.704483    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:21.708499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:21.734944    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.735027    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:21.738688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:21.768890    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.768890    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:21.768987    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:21.768987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:21.800297    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:21.800344    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:21.854571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:21.854571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:21.921230    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:21.921230    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:21.961787    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:21.961787    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:22.060842    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
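The "container status" step in these cycles runs a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: use crictl when it is installed, otherwise list containers with docker. A hypothetical Go equivalent of that fallback (illustrative only; the commands are from the log, the program is not):

    // status.go (illustrative): prefer crictl, fall back to docker ps -a.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        // Mirrors the `|| sudo docker ps -a` branch of the shell command.
        out, _ := exec.Command("docker", "ps", "-a").CombinedOutput()
        fmt.Print(string(out))
    }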
	I1216 06:24:24.566957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:24.591909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:24.624010    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.624010    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:24.627550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:24.657938    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.657938    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:24.661917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:24.688848    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.688848    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:24.692388    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:24.722130    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.722165    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:24.725802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:24.754067    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.754134    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:24.757294    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:24.783522    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.783595    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:24.787022    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:24.818531    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.818531    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:24.822200    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:24.851316    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.851371    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:24.851391    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:24.851391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:24.940030    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:24.941511    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:24.941511    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:24.967127    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:24.967127    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:25.018271    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:25.018358    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:25.077769    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:25.077769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:27.621222    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:27.644179    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:27.675033    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.675033    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:27.678724    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:27.707945    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.707945    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:27.712443    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:27.740635    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.740635    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:27.744539    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:27.775332    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.775332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:27.779621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:27.807884    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.807884    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:27.812207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:27.843877    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.843877    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:27.850126    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:27.878365    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.878365    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:27.883323    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:27.911733    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.911733    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:27.911733    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:27.911733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:27.975085    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:27.975085    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:28.011926    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:28.011926    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:28.117959    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
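	Each cycle checks for the same eight control-plane containers by Docker name filter; the per-component docker ps calls above collapse into one loop (a sketch, with the filter and --format strings copied verbatim from the log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== k8s_${c} =="
	  docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'   # empty output = no such container
	done

	Zero matches for every component, as here, means the pods were never created at all, which points at kubelet or the container runtime rather than at any single component.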
	I1216 06:24:28.117959    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:28.117959    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:28.146135    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:28.146135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:30.702904    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:30.732783    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:30.768726    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.768726    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:30.772432    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:30.804888    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.804888    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:30.809005    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:30.839403    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.839403    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:30.843668    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:30.874013    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.874013    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:30.878013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:30.906934    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.906934    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:30.911178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:30.936942    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.936942    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:30.940954    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:30.967843    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.967843    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:30.973798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:31.000614    8452 logs.go:282] 0 containers: []
	W1216 06:24:31.000614    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:31.000614    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:31.000614    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:31.063545    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:31.063545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:31.101704    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:31.101704    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:31.201356    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
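	For reference, the four "Gathering logs for ..." steps in each cycle correspond to these shell commands, copied verbatim from the Run: lines and runnable by hand on the node:

	sudo journalctl -u kubelet -n 400                                          # kubelet
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # dmesg
	sudo journalctl -u docker -u cri-docker -n 400                             # Docker
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status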
	I1216 06:24:31.201356    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:31.201356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:31.229634    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:31.229634    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:33.780745    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:33.805148    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:33.836319    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.836319    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:33.840094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:33.872138    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.872167    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:33.875487    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:33.908318    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.908318    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:33.912197    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:33.940179    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.940223    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:33.944152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:33.974912    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.974912    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:33.978728    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:34.004557    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.004557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:34.008971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:34.037591    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.037591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:34.041537    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:34.073153    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.073153    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:34.073153    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:34.073153    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:34.139585    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:34.139585    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:34.177888    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:34.177888    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:34.273589    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
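	The repeated stderr comes from kubectl reading /var/lib/minikube/kubeconfig, which points at https://localhost:8443; with no apiserver running, every API group discovery request fails with "connect: connection refused" on [::1]:8443. To test the endpoint directly rather than via discovery, one option (a sketch; /readyz is the standard apiserver readiness endpoint, an assumption not shown in this log):

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz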
	I1216 06:24:34.273589    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:34.273589    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:34.298805    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:34.298805    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:36.851957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:36.889887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:36.919682    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.919682    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:36.923468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:36.953008    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.953073    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:36.957253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:36.985770    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.985770    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:36.989059    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:37.015702    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.015702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:37.019508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:37.046311    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.046351    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:37.050327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:37.087936    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.087936    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:37.092175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:37.121271    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.121271    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:37.125767    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:37.153753    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.153814    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:37.153814    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:37.153869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:37.218058    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:37.218058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:37.256162    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:37.257161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:37.349292    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:37.349292    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:37.349292    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:37.378861    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:37.379384    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:39.931797    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:39.956069    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:39.991154    8452 logs.go:282] 0 containers: []
	W1216 06:24:39.991154    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:39.994809    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:40.021488    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.021488    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:40.025604    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:40.055464    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.055464    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:40.059576    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:40.085410    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.086402    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:40.090048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:40.120389    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.120389    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:40.125766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:40.159925    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.159962    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:40.163912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:40.190820    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.190820    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:40.194350    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:40.223821    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.223886    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:40.223886    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:40.223886    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:40.292033    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:40.292033    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:40.331274    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:40.331274    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:40.423708    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:40.423708    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:40.423708    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:40.452101    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:40.452101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.005925    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:43.029165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:43.060601    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.060601    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:43.064304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:43.092446    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.092446    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:43.096552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:43.127295    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.127347    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:43.130913    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:43.159919    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.159986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:43.163049    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:43.190310    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.190384    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:43.194093    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:43.223641    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.223641    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:43.227270    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:43.254592    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.254592    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:43.259912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:43.293166    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.293166    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:43.293166    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:43.293166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:43.328685    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:43.328685    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:43.412970    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:43.413012    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:43.413042    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:43.444573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:43.444573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.501857    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:43.501857    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.068154    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:46.095291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:46.125740    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.125740    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:46.131016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:46.160926    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.160926    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:46.164909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:46.192634    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.192634    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:46.196346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:46.224203    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.224203    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:46.228650    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:46.255541    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.255541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:46.259732    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:46.289377    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.289377    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:46.293566    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:46.321342    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.321342    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:46.325492    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:46.352311    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.352342    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:46.352342    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:46.352382    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.416761    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:46.416761    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:46.469641    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:46.469641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:46.580672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:46.581191    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:46.581229    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:46.608166    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:46.608166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:49.162834    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:49.187402    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:49.219893    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.219893    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:49.223424    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:49.252338    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.252338    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:49.255900    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:49.286106    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.286131    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:49.289776    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:49.317141    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.317141    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:49.322761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:49.353605    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.353605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:49.357674    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:49.385747    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.385793    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:49.388757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:49.417812    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.417812    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:49.421500    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:49.452746    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.452797    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:49.452797    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:49.452797    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:49.516432    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:49.516432    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:49.553647    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:49.553647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:49.647049    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:49.647087    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:49.647087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:49.671889    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:49.671889    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:52.224199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:52.248067    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:52.282412    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.282412    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:52.286308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:52.315955    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.315955    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:52.319894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:52.353188    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.353188    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:52.356528    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:52.387579    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.387579    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:52.392336    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:52.421909    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.421909    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:52.425890    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:52.458902    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.458902    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:52.462430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:52.498067    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.498140    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:52.501354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:52.528125    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.528125    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:52.528125    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:52.528125    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:52.593845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:52.593845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:52.632779    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:52.632779    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:52.732902    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:52.721650   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.722944   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.723751   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.725908   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.727170   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:52.732902    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:52.732902    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:52.762437    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:52.762437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.328400    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:55.355014    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:55.387364    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.387364    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:55.391085    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:55.417341    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.417341    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:55.421141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:55.450785    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.450785    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:55.454454    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:55.482484    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.482484    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:55.486100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:55.513682    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.513682    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:55.517291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:55.548548    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.548548    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:55.552971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:55.583380    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.583380    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:55.587471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:55.618619    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.618619    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:55.618619    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:55.618686    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:55.646962    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:55.646962    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.695480    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:55.695480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:55.757470    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:55.757470    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:55.796071    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:55.796071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:55.889833    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:55.877628   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.878745   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.880269   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.881391   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.882702   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:58.396122    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:58.423573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:58.454757    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.454757    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:58.460430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:58.490597    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.490597    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:58.493832    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:58.523149    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.523149    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:58.526960    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:58.558649    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.558649    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:58.562228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:58.591400    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.591400    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:58.595569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:58.624162    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.624162    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:58.628070    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:58.660578    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.660578    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:58.664236    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:58.693155    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.693155    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:58.693155    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:58.693155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:58.732408    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:58.733409    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:58.823465    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:58.812767   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.814019   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.815130   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.816828   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.818278   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:58.823465    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:58.823465    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:58.848772    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:58.848772    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:58.900567    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:58.900567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
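Each polling cycle in this log repeats the same runtime probe: for every expected control-plane component, list container IDs matching "k8s_<component>" via "docker ps -a --filter name=k8s_<component> --format {{.ID}}", and warn when nothing matches. A minimal standalone Go sketch of that probe pattern follows; the component list is taken from the log above, while the function name and structure are illustrative assumptions, not minikube's actual logs.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainers mirrors the per-component check in the log: it lists
// container IDs matching k8s_<component> and reports when none exist.
// The function name and shape are assumptions for illustration only.
func probeContainers(components []string) {
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

func main() {
	probeContainers([]string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	})
}

In every cycle below, each of these probes returns zero containers, which is why the subsequent kubectl calls cannot reach an API server.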
	I1216 06:25:01.465828    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:01.490385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:01.520316    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.520316    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:01.524299    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:01.555350    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.555350    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:01.559239    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:01.587077    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.587077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:01.591421    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:01.623853    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.623853    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:01.627746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:01.658165    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.658165    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:01.661588    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:01.703310    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.703310    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:01.709361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:01.740903    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.740903    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:01.744287    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:01.773431    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.773431    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:01.773431    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:01.773431    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:01.863541    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:01.853956   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.855113   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.856000   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.858627   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.859841   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:01.863541    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:01.863541    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:01.891816    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:01.891816    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:01.936351    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:01.936351    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:01.997563    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:01.997563    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.541470    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:04.565886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:04.595881    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.595881    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:04.599716    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:04.629724    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.629749    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:04.633814    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:04.666020    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.666047    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:04.669510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:04.699730    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.699730    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:04.704016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:04.734540    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.734540    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:04.738414    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:04.765651    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.765651    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:04.769397    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:04.797315    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.797315    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:04.801409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:04.832845    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.832845    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:04.832845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:04.832845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.869617    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:04.869617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:04.958334    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:04.947769   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.948641   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.950127   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.953617   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.954566   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:04.958334    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:04.958334    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:04.983497    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:04.983497    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:05.037861    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:05.037887    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.603239    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:07.626775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:07.655146    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.655146    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:07.658648    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:07.688192    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.688227    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:07.691749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:07.723836    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.723836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:07.727536    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:07.761238    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.761238    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:07.764987    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:07.792890    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.792890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:07.796847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:07.824734    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.824734    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:07.828821    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:07.859399    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.859399    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:07.862780    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:07.893406    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.893406    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:07.893457    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:07.893480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.954656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:07.954656    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:07.992200    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:07.993203    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:08.077979    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:08.068614   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.069601   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.072821   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.074198   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.075251   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:08.077979    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:08.077979    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:08.102718    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:08.102718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
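Every "describe nodes" attempt above fails identically: kubectl cannot reach localhost:8443 because no kube-apiserver container ever started. A hedged Go sketch of the underlying reachability check that produces the same "connection refused" symptom; the address and timeout here are assumptions mirroring the kubectl error, not code from minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's repeated failure above reduces to this: nothing accepts
	// TCP connections on the apiserver port, so every API call dies with
	// "dial tcp [::1]:8443: connect: connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:8443")
}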
	I1216 06:25:10.662101    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:10.688889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:10.721934    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.721996    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:10.727012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:10.760697    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.760746    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:10.763961    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:10.791222    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.791293    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:10.795121    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:10.826239    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.826317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:10.829753    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:10.857355    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.857355    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:10.861145    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:10.903922    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.903922    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:10.907990    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:10.937216    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.937281    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:10.940707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:10.969086    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.969086    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:10.969086    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:10.969238    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:11.062109    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:11.051521   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.052462   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.056878   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.058033   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.059089   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:11.062109    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:11.062109    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:11.090185    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:11.090185    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:11.141444    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:11.141444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:11.199181    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:11.199181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:13.741347    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:13.766441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:13.800424    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.800424    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:13.805169    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:13.835040    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.835040    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:13.839295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:13.864861    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.866077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:13.869598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:13.898887    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.898887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:13.903167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:13.931208    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.931208    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:13.936649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:13.963722    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.963722    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:13.967474    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:13.998640    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.998640    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:14.002572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:14.031349    8452 logs.go:282] 0 containers: []
	W1216 06:25:14.031401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:14.031401    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:14.031401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:14.124587    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:14.114187   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.115232   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.117492   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.120421   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.121924   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:14.124587    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:14.124714    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:14.153583    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:14.153583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:14.202636    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:14.202636    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:14.260591    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:14.260591    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:16.808603    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:16.833787    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:16.864300    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.864300    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:16.868592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:16.897549    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.897549    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:16.900917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:16.931516    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.931557    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:16.936698    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:16.965053    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.965053    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:16.969015    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:16.997017    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.997017    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:17.000551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:17.028733    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.028733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:17.032830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:17.062242    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.062242    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:17.066193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:17.096111    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.096186    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:17.096186    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:17.096243    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:17.126801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:17.126801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:17.178392    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:17.178392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:17.239223    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:17.239223    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:17.276363    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:17.277364    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:17.362910    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:17.350082   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.351537   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.353217   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356242   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356652   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:19.869062    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:19.894371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:19.924915    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.924915    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:19.929351    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:19.956535    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.956535    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:19.960534    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:19.989334    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.989334    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:19.993202    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:20.021108    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.021108    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:20.025230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:20.054251    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.054251    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:20.057788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:20.088787    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.088860    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:20.092250    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:20.120577    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.120577    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:20.123857    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:20.153015    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.153015    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:20.153015    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:20.153015    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:20.241391    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:20.241391    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:20.241391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:20.267492    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:20.267554    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:20.321240    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:20.321880    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:20.384978    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:20.384978    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
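Between probes, each cycle gathers the same four log sources: the docker and cri-docker units, container status via crictl (falling back to docker ps), the kubelet unit, and dmesg. A short Go sketch of that gathering step, with the commands copied verbatim from the log; running them locally instead of over SSH inside the minikube node container is an illustrative simplification:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Gathering commands copied verbatim from the log; minikube runs
	// them over SSH inside the node container, while this sketch runs
	// them locally for illustration.
	cmds := []string{
		"sudo journalctl -u docker -u cri-docker -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
		}
		fmt.Printf("==> %s\n%s\n", c, out)
	}
}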
	I1216 06:25:22.926087    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:22.949774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:22.982854    8452 logs.go:282] 0 containers: []
	W1216 06:25:22.982854    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:22.986923    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:23.017638    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.017638    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:23.021130    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:23.052442    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.052667    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:23.058175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:23.085210    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.085210    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:23.089664    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:23.120747    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.120795    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:23.124581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:23.150600    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.150600    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:23.154602    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:23.182147    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.182147    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:23.185649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:23.217087    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.217087    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:23.217087    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:23.217087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:23.280619    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:23.280619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:23.318090    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:23.318090    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:23.406270    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:23.406270    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:23.406270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:23.435128    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:23.435128    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:25.989934    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:26.012706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:26.043141    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.043141    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:26.047435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:26.075985    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.075985    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:26.079830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:26.110575    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.110575    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:26.113774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:26.144668    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.144668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:26.148428    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:26.175392    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.175392    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:26.179120    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:26.211067    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.211067    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:26.215072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:26.243555    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.243586    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:26.246934    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:26.279876    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.279876    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:26.279876    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:26.279876    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:26.387447    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:26.387488    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:26.387537    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:26.413896    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:26.413896    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:26.462318    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:26.462318    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:26.527832    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:26.527832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.072565    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:29.096390    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:29.127989    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.127989    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:29.131385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:29.158741    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.158741    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:29.162538    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:29.190346    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.190346    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:29.193798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:29.222234    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.222234    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:29.225740    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:29.252553    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.252553    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:29.256489    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:29.285679    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.285733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:29.289742    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:29.320841    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.321050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:29.324841    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:29.352461    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.352587    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:29.352615    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:29.352615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:29.419045    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:29.419045    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.457659    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:29.457659    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:29.544155    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:29.544155    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:29.544155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:29.571612    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:29.571646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:32.139910    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:32.164438    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:32.196526    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.196526    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:32.200231    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:32.226279    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.226279    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:32.230146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:32.257831    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.257831    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:32.262665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:32.293641    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.293641    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:32.297746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:32.327055    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.327055    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:32.331274    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:32.362206    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.362206    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:32.365146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:32.394600    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.394600    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:32.400058    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:32.428075    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.428075    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:32.428075    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:32.428075    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:32.491661    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:32.491661    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:32.528847    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:32.528847    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:32.616464    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:32.616464    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:32.616464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:32.642397    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:32.642397    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:35.191852    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:35.225285    8452 out.go:203] 
	W1216 06:25:35.227244    8452 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 06:25:35.227244    8452 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 06:25:35.227244    8452 out.go:285] * Related issues:
	* Related issues:
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1216 06:25:35.230096    8452 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-256200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 105
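What the loop above shows: minikube's wait logic re-ran the same probe set every few seconds (pgrep for an apiserver process, then docker ps -a --filter=name=k8s_* for each control-plane container) and got empty results for the whole 6m0s window, which is exactly the K8S_APISERVER_MISSING exit path. A minimal sketch for poking at this by hand, reusing the probes quoted verbatim in the log (the profile name comes from the failing test; none of this is harness code):

	# run the same probes minikube ran, from the host shell
	minikube -p newest-cni-256200 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
	minikube -p newest-cni-256200 ssh "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
	# the kubelet journal usually records why the apiserver static pod never started
	minikube -p newest-cni-256200 ssh "sudo journalctl -u kubelet -n 400"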
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256200
helpers_test.go:244: (dbg) docker inspect newest-cni-256200:

-- stdout --
	[
	    {
	        "Id": "144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66",
	        "Created": "2025-12-16T06:09:14.512792797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436653,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:19:21.496573864Z",
	            "FinishedAt": "2025-12-16T06:19:16.313765237Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hostname",
	        "HostsPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hosts",
	        "LogPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66-json.log",
	        "Name": "/newest-cni-256200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-256200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256200",
	                "Source": "/var/lib/docker/volumes/newest-cni-256200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256200",
	                "name.minikube.sigs.k8s.io": "newest-cni-256200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8e6d675d034626362ba9bfe3ff7eb692b71509157c5f340d1ebcb47d8e5bca3",
	            "SandboxKey": "/var/run/docker/netns/e8e6d675d034",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55872"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55868"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55869"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55871"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c97a08422fb6ea0a0f62c56d96c89be84aa4e33beba1ccaa82b7390e64b42c8e",
	                    "EndpointID": "fd51517b1d43bd1aa0aedcd49011763e39b0ec0911fbe06e3e82710415d585b2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256200",
	                        "144d2cf5befb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
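Two details worth pulling out of this inspect dump: State.Status is "running" (the node container itself survived the stop/start cycle), and 8443/tcp is published on 127.0.0.1:55871, so the port mapping is intact and the connection-refused errors above come from nothing listening on 8443 inside the node, not from a broken mapping. A sketch with the standard docker CLI (not harness code) that extracts just those two fields:

	docker inspect -f "{{.State.Status}}" newest-cni-256200
	docker port newest-cni-256200 8443
	# expected from the dump above: "running" and "127.0.0.1:55871"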
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (593.8753ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
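The harness's "(may be ok)" caveat matches how minikube status reports: the exit code is a per-component bitmask, so exit status 2 alongside Host=Running appears to mean the host container is up while the cluster components are not, consistent with the missing apiserver above (this reading is inferred from minikube's status exit-code conventions, not stated by the harness). A sketch that asks for the other components explicitly, using the same --format flag the harness already uses:

	minikube status -p newest-cni-256200 --format "{{.Host}} {{.Kubelet}} {{.APIServer}}"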
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25
E1216 06:25:39.033032   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:25:39.039870   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:25:39.052282   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:25:39.074002   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:25:39.116966   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:25:39.198801   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:25:39.360135   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25: (1.4788541s)
E1216 06:25:39.681737   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
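The cert_rotation errors interleaved here reference profiles\enable-default-cni-030800\client.crt, a certificate for a different profile that no longer exists on disk, so they read as stale-kubeconfig noise from the test client rather than part of this failure. A hypothetical cleanup sketch (nothing the harness runs; the entry names assume minikube's usual profile-named kubeconfig entries):

	kubectl config delete-context enable-default-cni-030800
	kubectl config delete-cluster enable-default-cni-030800
	kubectl config unset users.enable-default-cni-030800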
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-030800 sudo iptables -t nat -L -n -v                                 │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status kubelet --all --full --no-pager         │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat kubelet --no-pager                         │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status docker --all --full --no-pager          │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat docker --no-pager                          │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/docker/daemon.json                              │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo docker system info                                       │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat cri-docker --no-pager                      │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cri-dockerd --version                                    │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status containerd --all --full --no-pager      │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat containerd --no-pager                      │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /lib/systemd/system/containerd.service               │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/containerd/config.toml                          │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo containerd config dump                                   │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status crio --all --full --no-pager            │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat crio --no-pager                            │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo crio config                                              │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete  │ -p kubenet-030800                                                               │ kubenet-030800 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:21:31
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:21:31.068463    4424 out.go:360] Setting OutFile to fd 1300 ...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.112163    4424 out.go:374] Setting ErrFile to fd 1224...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.126168    4424 out.go:368] Setting JSON to false
	I1216 06:21:31.128157    4424 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7112,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:21:31.129155    4424 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:21:31.133155    4424 out.go:179] * [kubenet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:21:31.136368    4424 notify.go:221] Checking for updates...
	I1216 06:21:31.137751    4424 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:31.140914    4424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:21:31.143313    4424 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:21:31.145626    4424 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:21:31.147629    4424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:21:31.150478    4424 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151727    4424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:21:31.272417    4424 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:21:31.275875    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.534539    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.516919297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.537553    4424 out.go:179] * Using the docker driver based on user configuration
	I1216 06:21:31.541211    4424 start.go:309] selected driver: docker
	I1216 06:21:31.541254    4424 start.go:927] validating driver "docker" against <nil>
	I1216 06:21:31.541286    4424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:21:31.597589    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.842240    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.823958826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.842240    4424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:21:31.843240    4424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:31.846236    4424 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:21:31.848222    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:21:31.848222    4424 start.go:353] cluster config:
	{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:21:31.851222    4424 out.go:179] * Starting "kubenet-030800" primary control-plane node in "kubenet-030800" cluster
	I1216 06:21:31.860233    4424 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:21:31.863229    4424 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:21:31.866228    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:31.866228    4424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:21:31.866228    4424 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:21:31.866228    4424 cache.go:65] Caching tarball of preloaded images
	I1216 06:21:31.866228    4424 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:21:31.866228    4424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:21:31.866228    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:31.866228    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json: {Name:mkd9bbe5249f898d86f7b7f3961735d2ed71d636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:31.935458    4424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:21:31.935458    4424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:21:31.935988    4424 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:21:31.936042    4424 start.go:360] acquireMachinesLock for kubenet-030800: {Name:mka6ae821c9ad8ee62e1a8eef0f2acffe81ebe64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:21:31.936202    4424 start.go:364] duration metric: took 160.2µs to acquireMachinesLock for "kubenet-030800"
	I1216 06:21:31.936352    4424 start.go:93] Provisioning new machine with config: &{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:31.936477    4424 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:21:30.055522    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:30.078529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:30.109519    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.109519    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:30.112511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:30.144765    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.144765    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:30.149313    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:30.181852    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.181852    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:30.184838    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:30.217464    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.217504    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:30.221332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:30.249301    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.249301    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:30.252441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:30.278998    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.278998    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:30.281997    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:30.312414    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.312414    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:30.318242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:30.357304    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.357361    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:30.357422    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:30.357422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:30.746929    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:30.746929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:30.795665    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:30.795746    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:30.878691    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:30.878691    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:30.913679    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:30.913679    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:31.010602    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
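Note: the repeated "connection refused" errors above simply mean nothing is listening on localhost:8443 inside the node while minikube gathers diagnostics; every control-plane container probe in this loop comes back empty. The same check can be run by hand with the command already used in the log (illustrative; adjust the profile name to the cluster under test):

    minikube -p no-preload-686300 ssh -- docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}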
	I1216 06:21:33.516463    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:33.569431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:33.607164    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.607164    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:33.610177    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:33.645795    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.645795    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:33.649795    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:33.678786    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.678786    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:33.681789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:33.712090    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.712090    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:33.716091    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:33.749207    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.749207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:33.753072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:33.787281    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.787317    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:33.790849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:33.822451    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.822451    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:33.828957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:33.859827    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.859881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:33.859881    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:33.859929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:33.922584    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:33.922584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:34.010640    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:34.010640    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:34.010640    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:34.047937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:34.047937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:34.108035    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:34.108113    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:31.939854    4424 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:21:31.939854    4424 start.go:159] libmachine.API.Create for "kubenet-030800" (driver="docker")
	I1216 06:21:31.939854    4424 client.go:173] LocalClient.Create starting
	I1216 06:21:31.940866    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.946190    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:21:32.002258    4424 cli_runner.go:211] docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:21:32.006251    4424 network_create.go:284] running [docker network inspect kubenet-030800] to gather additional debugging logs...
	I1216 06:21:32.006251    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800
	W1216 06:21:32.057274    4424 cli_runner.go:211] docker network inspect kubenet-030800 returned with exit code 1
	I1216 06:21:32.057274    4424 network_create.go:287] error running [docker network inspect kubenet-030800]: docker network inspect kubenet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-030800 not found
	I1216 06:21:32.057274    4424 network_create.go:289] output of [docker network inspect kubenet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-030800 not found
	
	** /stderr **
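Note: the inspect failure above is the expected probe for a network that has not been created yet; minikube runs the plain inspect only to capture debugging output. An equivalent existence check (illustrative):

    docker network ls --filter name=kubenet-030800 --format "{{.Name}}"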
	I1216 06:21:32.061267    4424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:21:32.137401    4424 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.168856    4424 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.184860    4424 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.200856    4424 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.216426    4424 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.232146    4424 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d96b0}
	I1216 06:21:32.232146    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 06:21:32.235443    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	W1216 06:21:32.288644    4424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800 returned with exit code 1
	W1216 06:21:32.288644    4424 network_create.go:149] failed to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:21:32.288644    4424 network_create.go:116] failed to create docker network kubenet-030800 192.168.94.0/24, will retry: subnet is taken
	I1216 06:21:32.308048    4424 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.321168    4424 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015f57d0}
	I1216 06:21:32.321265    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:21:32.325637    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	I1216 06:21:32.469323    4424 network_create.go:108] docker network kubenet-030800 192.168.103.0/24 created
	I1216 06:21:32.469323    4424 kic.go:121] calculated static IP "192.168.103.2" for the "kubenet-030800" container
	I1216 06:21:32.483125    4424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:21:32.541557    4424 cli_runner.go:164] Run: docker volume create kubenet-030800 --label name.minikube.sigs.k8s.io=kubenet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:21:32.608360    4424 oci.go:103] Successfully created a docker volume kubenet-030800
	I1216 06:21:32.611360    4424 cli_runner.go:164] Run: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:21:34.117036    4424 cli_runner.go:217] Completed: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.5056549s)
	I1216 06:21:34.117036    4424 oci.go:107] Successfully prepared a docker volume kubenet-030800
	I1216 06:21:34.117036    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:34.117036    4424 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:21:34.121793    4424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
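Note: the network creation above shows minikube's subnet retry in action: 192.168.94.0/24 was reserved locally, but Docker still reported "Pool overlaps", so the next /24 candidate (192.168.103.0/24) was tried and succeeded. A minimal bash sketch of that probe-and-retry loop (illustrative only; the octet progression is read off the subnets skipped above, and the network name example-net is an assumption):

    for octet in 49 58 67 76 85 94 103; do
      if docker network create --driver=bridge \
           --subnet="192.168.${octet}.0/24" --gateway="192.168.${octet}.1" \
           example-net >/dev/null 2>&1; then
        echo "created 192.168.${octet}.0/24"
        break
      fi  # "Pool overlaps with other one on this address space" falls through here
    done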
	I1216 06:21:37.760556    7800 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:21:37.760556    7800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:21:37.761189    7800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:21:37.761753    7800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:21:37.761881    7800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:21:37.761881    7800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:21:37.764442    7800 out.go:252]   - Generating certificates and keys ...
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:21:37.765188    7800 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:21:37.765955    7800 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:21:37.766018    7800 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:21:37.766124    7800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:21:37.766165    7800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:21:37.766271    7800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:21:37.766333    7800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:21:37.766397    7800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:21:37.766458    7800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:21:37.770151    7800 out.go:252]   - Booting up control plane ...
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:21:37.770817    7800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:21:37.770952    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:21:37.771091    7800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:21:37.771167    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:21:37.771225    7800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:21:37.771366    7800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004327208s
	I1216 06:21:37.771902    7800 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:21:37.772247    7800 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 06:21:37.772484    7800 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:21:37.772735    7800 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:21:37.773067    7800 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.101943404s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.591910767s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002177662s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:21:37.773799    7800 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:21:37.773799    7800 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:21:37.774455    7800 kubeadm.go:319] [mark-control-plane] Marking the node bridge-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:21:37.774523    7800 kubeadm.go:319] [bootstrap-token] Using token: lrkd8c.ky3vlqagn7chac73
	I1216 06:21:37.777890    7800 out.go:252]   - Configuring RBAC rules ...
	I1216 06:21:37.777890    7800 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:21:37.779666    7800 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:21:37.780278    7800 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:21:37.780278    7800 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:21:37.781243    7800 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--control-plane 
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
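Note: the [control-plane-check] lines in the kubeadm output above poll each component's health endpoint until it reports healthy. The same endpoints can be probed by hand (illustrative; the apiserver IP 192.168.85.2 is taken from the log, -k skips certificate verification, and the two localhost checks must be run on the node itself):

    curl -k https://192.168.85.2:8443/livez
    curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez     # kube-scheduler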
	I1216 06:21:37.782257    7800 cni.go:84] Creating CNI manager for "bridge"
	I1216 06:21:37.785969    7800 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1216 06:21:35.013402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:37.791788    7800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 06:21:37.806804    7800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 06:21:37.825807    7800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-030800 minikube.k8s.io/updated_at=2025_12_16T06_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=bridge-030800 minikube.k8s.io/primary=true
	I1216 06:21:37.839814    7800 ops.go:34] apiserver oom_adj: -16
	I1216 06:21:38.032186    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:38.534048    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.035804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.534294    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:36.686452    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:36.704466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:36.733394    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.733394    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:36.737348    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:36.773510    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.773510    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:36.776509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:36.805498    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.805498    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:36.809501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:36.845499    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.845499    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:36.849511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:36.879646    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.879646    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:36.884108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:36.915408    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.915408    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:36.920279    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:36.952754    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.952754    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:36.955734    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:36.987884    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.987884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:36.987884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:36.987884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:37.053646    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:37.053646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:37.093881    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:37.093881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:37.179527    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:37.179584    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:37.179628    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:37.206261    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:37.206297    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:39.778747    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:39.802598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:39.832179    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.832179    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:39.836704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:39.869121    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.869121    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:39.873774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:39.909668    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.909668    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:39.914691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:39.947830    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.947830    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:39.951594    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:40.034177    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:40.535099    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.034558    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.535126    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.034691    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.533593    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.035143    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.831113    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:44.554108    7800 kubeadm.go:1114] duration metric: took 6.7282073s to wait for elevateKubeSystemPrivileges
	I1216 06:21:44.554108    7800 kubeadm.go:403] duration metric: took 23.3439157s to StartCluster
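Note: the repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait loop: bring-up is not considered finished until the "default" ServiceAccount exists in the new cluster. An equivalent standalone wait (illustrative; binary and kubeconfig paths as they appear in the log):

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
          --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done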
	I1216 06:21:44.554108    7800 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.554108    7800 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:44.555899    7800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.557179    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:21:44.557179    7800 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:44.557179    7800 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:21:44.557179    7800 addons.go:70] Setting storage-provisioner=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:239] Setting addon storage-provisioner=true in "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:70] Setting default-storageclass=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 host.go:66] Checking if "bridge-030800" exists ...
	I1216 06:21:44.557179    7800 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-030800"
	I1216 06:21:44.557179    7800 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.910438    7800 out.go:179] * Verifying Kubernetes components...
	I1216 06:21:39.982557    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.982557    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:39.986642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:40.018169    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.018169    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:40.021165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:40.051243    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.051243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:40.057090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:40.084414    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.084414    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:40.084414    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:40.084414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:40.144414    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:40.144414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:40.179632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:40.179632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:40.269800    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:40.270323    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:40.270323    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:40.298399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:40.298399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:42.860131    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:42.883733    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:42.914204    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.914204    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:42.918228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:42.950380    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.950459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:42.954335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:42.985562    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.985562    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:42.988741    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:43.016773    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.016773    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:43.020957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:43.059042    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.059042    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:43.062546    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:43.091529    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.091529    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:43.095547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:43.122188    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.122188    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:43.126264    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:43.154603    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.154667    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:43.154687    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:43.154709    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:43.217437    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:43.217437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:43.256550    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:43.256550    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:43.334672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:43.334672    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:43.334672    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:43.363324    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:43.363324    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
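
The block above is one complete log-collection pass by minikube: for every expected control-plane component it lists containers named k8s_<component> and warns when none match, then gathers kubelet, dmesg, describe-nodes, Docker, and container-status logs. Below is a minimal Go sketch of that container probe, assuming plain os/exec calls against the docker CLI are close enough for illustration; this is not minikube's actual logs.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component list copied from the probes in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

Eight empty probes in a row, as seen here, mean no control-plane container ever came up, which is consistent with the connection-refused errors kubectl reports against localhost:8443 throughout this section.
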
	I1216 06:21:44.625758    7800 addons.go:239] Setting addon default-storageclass=true in "bridge-030800"
	I1216 06:21:44.961765    7800 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:21:44.962159    7800 host.go:66] Checking if "bridge-030800" exists ...
	W1216 06:21:45.051007    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:45.413866    7800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:45.416342    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:45.428762    7800 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.428762    7800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:21:45.433231    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.481472    7800 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:45.481472    7800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:21:45.485567    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.487870    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.534738    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:21:45.540734    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.651776    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.743561    7800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:21:45.947134    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:48.661269    7800 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.1264885s)
	I1216 06:21:48.661269    7800 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
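
The command completed above is minikube injecting a host record into CoreDNS: it streams the coredns ConfigMap through sed, inserting a hosts block (192.168.65.254 host.minikube.internal) ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then pipes the result back through kubectl replace. A rough Go equivalent of just the textual patch is sketched below; the Corefile skeleton is an assumption for the example, and only the injected blocks are taken from the sed expression in the log.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Skeleton Corefile (assumed); real Corefiles carry more plugins.
	corefile := `.:53 {
    errors
    forward . /etc/resolv.conf
}`
	hostsBlock := "    hosts {\n" +
		"       192.168.65.254 host.minikube.internal\n" +
		"       fallthrough\n" +
		"    }\n"
	var patched strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			patched.WriteString(hostsBlock) // host record goes ahead of upstream forwarding
		}
		if trimmed == "errors" {
			patched.WriteString("    log\n") // the second -e expression in the log
		}
		patched.WriteString(line)
		patched.WriteString("\n")
	}
	fmt.Print(patched.String())
}
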
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.2776091s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.1858261s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.9822555s)
	I1216 06:21:48.933443    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:48.974829    7800 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:21:48.977844    7800 addons.go:530] duration metric: took 4.4206041s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:21:48.994296    7800 node_ready.go:35] waiting up to 15m0s for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 node_ready.go:49] node "bridge-030800" is "Ready"
	I1216 06:21:49.024312    7800 node_ready.go:38] duration metric: took 30.0163ms for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:21:49.030307    7800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.051593    7800 api_server.go:72] duration metric: took 4.4943521s to wait for apiserver process to appear ...
	I1216 06:21:49.051593    7800 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:21:49.051593    7800 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56268/healthz ...
	I1216 06:21:49.061499    7800 api_server.go:279] https://127.0.0.1:56268/healthz returned 200:
	ok
	I1216 06:21:49.063514    7800 api_server.go:141] control plane version: v1.34.2
	I1216 06:21:49.063514    7800 api_server.go:131] duration metric: took 11.9204ms to wait for apiserver health ...
	I1216 06:21:49.064510    7800 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:21:49.088115    7800 system_pods.go:59] 8 kube-system pods found
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.088115    7800 system_pods.go:74] duration metric: took 23.6038ms to wait for pod list to return data ...
	I1216 06:21:49.088115    7800 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:21:49.094110    7800 default_sa.go:45] found service account: "default"
	I1216 06:21:49.094110    7800 default_sa.go:55] duration metric: took 5.9949ms for default service account to be created ...
	I1216 06:21:49.094110    7800 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:21:49.100097    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.100097    7800 retry.go:31] will retry after 202.33386ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.170358    7800 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-030800" context rescaled to 1 replicas
	I1216 06:21:49.310950    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.310950    7800 retry.go:31] will retry after 302.122926ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.630338    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630577    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.630663    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.630695    7800 retry.go:31] will retry after 447.973015ms: missing components: kube-dns, kube-proxy
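
The retry.go lines above show the wait loop: each pass lists the kube-system pods, computes which required components (here kube-dns and kube-proxy) are still Pending, and sleeps for a growing, jittered delay before polling again. An illustrative sketch of that pattern follows, with a stubbed pod check standing in for the real API call; the growth factor and jitter are assumptions, and only the ~200ms starting delay matches the log.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// missingComponents stands in for the real kube-system pod-list check; it
// pretends one more component becomes Ready on every poll so the loop ends.
func missingComponents(attempt int) []string {
	required := []string{"kube-dns", "kube-proxy"}
	if attempt >= len(required) {
		return nil
	}
	return required[attempt:]
}

func main() {
	delay := 200 * time.Millisecond // starting point matching the ~202ms retry above
	for attempt := 0; ; attempt++ {
		missing := missingComponents(attempt)
		if len(missing) == 0 {
			fmt.Println("all required k8s-apps are running")
			return
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: missing components: %v\n", jittered, missing)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // assumed growth factor between polls
	}
}
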
	I1216 06:21:45.929428    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:45.950755    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:45.982605    8452 logs.go:282] 0 containers: []
	W1216 06:21:45.982605    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:45.987711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:46.020649    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.020649    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:46.024931    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:46.058836    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.058836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:46.066651    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:46.094860    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.094860    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:46.098689    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:46.127246    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.127246    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:46.130937    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:46.159519    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.159519    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:46.163609    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:46.195483    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.195483    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:46.199178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:46.229349    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.229349    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:46.229349    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:46.229349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:46.292392    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:46.292392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:46.330903    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:46.330903    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:46.427283    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:46.427283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:46.427283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:46.458647    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:46.458647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:49.005293    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.027308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:49.061499    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.061499    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:49.064510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:49.094110    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.094110    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:49.097105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:49.126749    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.126749    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:49.132748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:49.169344    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.169344    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:49.174332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:49.209103    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.209103    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:49.214581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:49.248644    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.248644    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:49.251643    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:49.281632    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.281632    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:49.285645    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:49.316955    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.316955    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:49.316955    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:49.317942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:49.391656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:49.391738    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:49.432724    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:49.432724    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:49.523989    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:49.523989    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:49.523989    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:49.552004    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:49.552004    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:48.467044    4424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.3450525s)
	I1216 06:21:48.467044    4424 kic.go:203] duration metric: took 14.349809s to extract preloaded images to volume ...
	I1216 06:21:48.470844    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:48.730876    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:48.710057733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:48.733867    4424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:21:48.983392    4424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-030800 --name kubenet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-030800 --network kubenet-030800 --ip 192.168.103.2 --volume kubenet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:21:49.764686    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Running}}
	I1216 06:21:49.828590    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:49.890595    4424 cli_runner.go:164] Run: docker exec kubenet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:21:50.004225    4424 oci.go:144] the created container "kubenet-030800" has a running status.
	I1216 06:21:50.005228    4424 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.057161    4424 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:21:50.141101    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:50.207656    4424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:21:50.207656    4424 kic_runner.go:114] Args: [docker exec --privileged kubenet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:21:50.326664    4424 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
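
Throughout this section the tests resolve SSH endpoints with docker container inspect and the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, i.e. the ephemeral host port Docker published for the container's 22/tcp when it was started with --publish=127.0.0.1::22. A small stand-alone sketch of that lookup is below; hostPort is a hypothetical helper, not minikube code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hostPort wraps the inspect template visible in the log lines above.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("kubenet-030800", "22/tcp")
	if err != nil {
		fmt.Fprintln(os.Stderr, "inspect failed:", err)
		os.Exit(1)
	}
	// The sshutil.go lines in the log then dial 127.0.0.1:<port> with the
	// machine's id_rsa key.
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}
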
	I1216 06:21:50.087090    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.087090    7800 retry.go:31] will retry after 426.637768ms: missing components: kube-dns, kube-proxy
	I1216 06:21:50.538640    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.538640    7800 retry.go:31] will retry after 479.139187ms: missing components: kube-dns
	I1216 06:21:51.025065    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.025065    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:51.025193    7800 retry.go:31] will retry after 758.159415ms: missing components: kube-dns
	I1216 06:21:51.791088    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Running
	I1216 06:21:51.791088    7800 system_pods.go:126] duration metric: took 2.6969413s to wait for k8s-apps to be running ...
	I1216 06:21:51.791088    7800 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:21:51.798336    7800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:21:51.818183    7800 system_svc.go:56] duration metric: took 27.0943ms WaitForService to wait for kubelet
	I1216 06:21:51.818183    7800 kubeadm.go:587] duration metric: took 7.2609035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:51.818183    7800 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:21:51.825244    7800 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:21:51.825244    7800 node_conditions.go:123] node cpu capacity is 16
	I1216 06:21:51.825244    7800 node_conditions.go:105] duration metric: took 7.0607ms to run NodePressure ...
	I1216 06:21:51.825244    7800 start.go:242] waiting for startup goroutines ...
	I1216 06:21:51.825244    7800 start.go:247] waiting for cluster config update ...
	I1216 06:21:51.825244    7800 start.go:256] writing updated cluster config ...
	I1216 06:21:51.833706    7800 ssh_runner.go:195] Run: rm -f paused
	I1216 06:21:51.841597    7800 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:21:51.851622    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:21:53.862268    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
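
The pod_ready.go lines above poll each selected kube-system pod for the Ready condition; the W line means coredns-66bc5c9577-8s6v4 was fetched without error but its Ready condition was still False. A minimal illustration of that check using the k8s.io/api types follows; isReady is a hypothetical helper, and minikube's actual predicate may differ.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isReady answers the question the log keeps polling: does the pod's
// status carry condition Ready=True?
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println("ready:", isReady(pod)) // false, so the wait loop retries
}
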
	I1216 06:21:52.109148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:52.140748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:52.186855    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.186855    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:52.193111    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:52.227511    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.227511    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:52.232508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:52.265331    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.265331    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:52.270635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:52.301130    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.301130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:52.307669    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:52.342623    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.342623    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:52.347794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:52.387246    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.387246    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:52.392629    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:52.445055    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.445143    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:52.449252    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:52.480071    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.480071    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:52.480071    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:52.480071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:52.547115    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:52.547115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:52.590630    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:52.590630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:52.690412    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:52.690412    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:52.690412    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:52.718573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:52.718573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:52.546527    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:52.603159    4424 machine.go:94] provisionDockerMachine start ...
	I1216 06:21:52.606161    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.662674    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.679442    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.679519    4424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:21:52.842464    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:52.842464    4424 ubuntu.go:182] provisioning hostname "kubenet-030800"
	I1216 06:21:52.846473    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.908771    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.908771    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.908771    4424 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-030800 && echo "kubenet-030800" | sudo tee /etc/hostname
	I1216 06:21:53.084692    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:53.088917    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.150284    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.150284    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.150284    4424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:21:53.322772    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:21:53.322772    4424 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:21:53.322772    4424 ubuntu.go:190] setting up certificates
	I1216 06:21:53.322772    4424 provision.go:84] configureAuth start
	I1216 06:21:53.326658    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:53.379472    4424 provision.go:143] copyHostCerts
	I1216 06:21:53.379472    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:21:53.379472    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:21:53.379472    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:21:53.381506    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:21:53.381506    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:21:53.382025    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:21:53.383238    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:21:53.383286    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:21:53.383622    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:21:53.384729    4424 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-030800 san=[127.0.0.1 192.168.103.2 kubenet-030800 localhost minikube]
	I1216 06:21:53.446404    4424 provision.go:177] copyRemoteCerts
	I1216 06:21:53.450578    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:21:53.453632    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.508049    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:53.625841    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:21:53.652177    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:21:53.678648    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:21:53.702593    4424 provision.go:87] duration metric: took 379.8156ms to configureAuth
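
configureAuth, completed above, generated a server certificate whose SANs cover every name the machine may be dialed by: 127.0.0.1, 192.168.103.2, kubenet-030800, localhost, and minikube. A compact sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 is below; it is self-signed and uses ECDSA for brevity, both assumptions, whereas the real server.pem is signed by the ca.pem key pair named in the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-030800"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"kubenet-030800", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	// Self-signed for the sketch (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
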
	I1216 06:21:53.702593    4424 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:21:53.703116    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:53.706020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.763080    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.763659    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.763659    4424 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:21:53.941197    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:21:53.941229    4424 ubuntu.go:71] root file system type: overlay
	I1216 06:21:53.941395    4424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:21:53.945310    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.000318    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.000318    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.000318    4424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:21:54.194977    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:21:54.198986    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.262183    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.262873    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.262912    4424 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:21:55.764091    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:21:54.174803160 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:21:55.764091    4424 machine.go:97] duration metric: took 3.1608879s to provisionDockerMachine
	I1216 06:21:55.764091    4424 client.go:176] duration metric: took 23.8239056s to LocalClient.Create
	I1216 06:21:55.764091    4424 start.go:167] duration metric: took 23.8239056s to libmachine.API.Create "kubenet-030800"
	I1216 06:21:55.764091    4424 start.go:293] postStartSetup for "kubenet-030800" (driver="docker")
	I1216 06:21:55.764091    4424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:21:55.769330    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:21:55.774020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:55.832721    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:55.960433    4424 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:21:55.968801    4424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:21:55.968801    4424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:21:55.969505    4424 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:21:55.973822    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:21:55.985938    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:21:56.011522    4424 start.go:296] duration metric: took 247.4281ms for postStartSetup
	I1216 06:21:56.016962    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.071317    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:56.078704    4424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:21:56.082131    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	W1216 06:21:55.087921    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:56.146380    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.278810    4424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:21:56.289463    4424 start.go:128] duration metric: took 24.3526481s to createHost
	I1216 06:21:56.289463    4424 start.go:83] releasing machines lock for "kubenet-030800", held for 24.352923s
	I1216 06:21:56.293770    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.349762    4424 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:21:56.354527    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.355718    4424 ssh_runner.go:195] Run: cat /version.json
	I1216 06:21:56.359207    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.419217    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.420010    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.548149    4424 ssh_runner.go:195] Run: systemctl --version
	W1216 06:21:56.549226    4424 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
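	This exit-127 failure is the source of the "Failing to connect to https://registry.k8s.io/" warning a few lines below: the reachability probe invokes the Windows binary name curl.exe inside the Linux node container, where only curl exists. The equivalent probe with the Linux binary name (a sketch; not what this run executed) would be:

	    curl -sS -m 2 https://registry.k8s.io/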
	I1216 06:21:56.567514    4424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:21:56.574755    4424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:21:56.580435    4424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:21:56.633416    4424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:21:56.633416    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:56.633416    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:56.633416    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:56.657618    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 06:21:56.658090    4424 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:21:56.658134    4424 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:21:56.678200    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:21:56.690681    4424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:21:56.695430    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:21:56.714310    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.735757    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:21:56.754647    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.771876    4424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:21:56.790078    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:21:56.810936    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:21:56.828529    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:21:56.859717    4424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:21:56.876724    4424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:21:56.891719    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.036224    4424 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 06:21:57.185425    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:57.185522    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:57.190092    4424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:21:57.213249    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.239566    4424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:21:57.303231    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.326154    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:21:57.344861    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:57.372889    4424 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:21:57.386009    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:21:57.401220    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1216 06:21:57.422607    4424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:21:57.590920    4424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:21:57.727211    4424 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:21:57.727211    4424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:21:57.751771    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:21:57.772961    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.912458    4424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:21:58.834645    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:21:58.856232    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:21:58.880727    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:58.906712    4424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:21:59.052553    4424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:21:59.194941    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.333924    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:21:59.357147    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:21:59.379570    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.513788    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:21:59.631489    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:59.649336    4424 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:21:59.653752    4424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:21:59.660755    4424 start.go:564] Will wait 60s for crictl version
	I1216 06:21:59.665368    4424 ssh_runner.go:195] Run: which crictl
	I1216 06:21:59.677200    4424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:21:59.717428    4424 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:21:59.720622    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:21:59.765567    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	W1216 06:21:55.865199    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	W1216 06:21:58.365962    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:55.273773    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:55.297441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:55.334351    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.334404    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:55.338338    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:55.372344    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.372344    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:55.375335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:55.429711    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.429711    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:55.432707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:55.463415    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.463415    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:55.466882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:55.495871    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.495871    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:55.499782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:55.530135    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.530135    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:55.534032    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:55.561956    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.561956    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:55.567456    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:55.598684    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.598684    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:55.598684    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:55.598684    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:55.661553    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:55.661553    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:55.699330    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:55.699330    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:55.806271    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:55.806271    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:55.806271    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:55.839937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:55.839937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.400590    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:58.422580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:58.453081    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.453081    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:58.457176    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:58.482739    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.482739    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:58.486288    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:58.516198    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.516198    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:58.520374    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:58.550134    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.550134    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:58.553679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:58.585815    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.585815    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:58.589532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:58.620180    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.620180    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:58.626021    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:58.656946    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.656946    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:58.659942    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:58.687618    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.687618    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:58.687618    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:58.687618    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:58.777493    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:58.777493    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:58.777493    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:58.805676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:58.805676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.860391    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:58.860391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:58.925444    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:58.925444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:59.807579    4424 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:21:59.810667    4424 cli_runner.go:164] Run: docker exec -t kubenet-030800 dig +short host.docker.internal
	I1216 06:21:59.962844    4424 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:21:59.967733    4424 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:21:59.974503    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
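	The /etc/hosts update above uses a strip-then-append pattern: remove any existing entry for the name, append a fresh one, then copy the temp file back with sudo cp. The copy (rather than a rename) matters because /etc/hosts inside a Docker container is a bind mount and must be written in place. A generic form of the same idiom, with hypothetical NAME and IP placeholders:

	    NAME=host.example.internal; IP=192.0.2.1   # hypothetical values
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts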
	I1216 06:21:59.995371    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:00.053937    4424 kubeadm.go:884] updating cluster {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:22:00.053937    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:22:00.057874    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.094105    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.094105    4424 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:22:00.097332    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.129189    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.129225    4424 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:22:00.129280    4424 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:22:00.129486    4424 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:22:00.132350    4424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:22:00.208072    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:00.208072    4424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:22:00.208072    4424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-030800 NodeName:kubenet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:22:00.208072    4424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
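
	A generated config like the one above can be sanity-checked with kubeadm's built-in validator once the file has been copied to the node (see the scp of kubeadm.yaml.new below). A sketch, assuming the v1.34.2 kubeadm binary is present under the binaries directory the next step lists; minikube itself does not run this:

	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new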
	
	I1216 06:22:00.213204    4424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:22:00.225061    4424 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:22:00.229012    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:22:00.242127    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
	I1216 06:22:00.258591    4424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:22:00.278876    4424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 06:22:00.305788    4424 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:22:00.315868    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:22:00.339710    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:00.483171    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:00.505844    4424 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800 for IP: 192.168.103.2
	I1216 06:22:00.505844    4424 certs.go:195] generating shared ca certs ...
	I1216 06:22:00.505844    4424 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.506501    4424 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:22:00.507023    4424 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:22:00.507484    4424 certs.go:257] generating profile certs ...
	I1216 06:22:00.507484    4424 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key
	I1216 06:22:00.507484    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt with IP's: []
	I1216 06:22:00.552695    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt ...
	I1216 06:22:00.552695    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt: {Name:mk4783bd7e1619c0ea341eaca75005ddd88d5b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.553960    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key ...
	I1216 06:22:00.553960    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key: {Name:mk427571c1896a50b896e76c58a633b5512ad44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.555335    4424 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8
	I1216 06:22:00.555661    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:22:00.581299    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 ...
	I1216 06:22:00.581299    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8: {Name:mk9cb22362f0ba7f5c0b5c6877c5c2e8d72eb278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.582304    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 ...
	I1216 06:22:00.582304    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8: {Name:mk2a3e21d232de7f748cffa074c96be0850cc9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.583303    4424 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt
	I1216 06:22:00.599920    4424 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key
	I1216 06:22:00.600703    4424 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key
	I1216 06:22:00.601353    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt with IP's: []
	I1216 06:22:00.664564    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt ...
	I1216 06:22:00.664564    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt: {Name:mk02eb62f20a18ad60f930ae30a248a87b7cb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.665010    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key ...
	I1216 06:22:00.665010    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key: {Name:mk8a8b2a6c6b1b3e2e2cc574e01303d6680bf793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.680006    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:22:00.680554    4424 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:22:00.680554    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:22:00.681404    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:22:00.683052    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:22:00.710388    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:22:00.737370    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:22:00.766290    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:22:00.790943    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:22:00.815072    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:22:00.839330    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:22:00.863340    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:22:00.921806    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:22:00.945068    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:22:00.972351    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:22:00.998813    4424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:22:01.025404    4424 ssh_runner.go:195] Run: openssl version
	I1216 06:22:01.039534    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.056142    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:22:01.077227    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.085140    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.089133    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
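	The openssl x509 -hash step computes the subject hash that OpenSSL uses to look up CAs in /etc/ssl/certs via <hash>.0 symlinks. Reproducing that convention by hand (a sketch of what tools like update-ca-certificates or c_rehash automate; not a command from this run):

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"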
	W1216 06:22:03.871305    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 06:22:03.871305    2100 node_ready.go:38] duration metric: took 6m0.0002926s for node "no-preload-686300" to be "Ready" ...
	I1216 06:22:03.874339    2100 out.go:203] 
	W1216 06:22:03.877050    2100 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:22:03.877050    2100 out.go:285] * 
	W1216 06:22:03.879403    2100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:22:03.881203    2100 out.go:203] 
	W1216 06:22:00.861344    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:22:01.860562    7800 pod_ready.go:99] pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8s6v4" not found
	I1216 06:22:01.860562    7800 pod_ready.go:86] duration metric: took 10.0087717s for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:01.860562    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:03.875170    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:01.467725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:01.493737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:01.526377    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.526426    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:01.530527    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:01.563582    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.563582    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:01.567007    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:01.606460    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.606460    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:01.610155    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:01.647965    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.647965    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:01.652309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:01.691011    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.691011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:01.695466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:01.736509    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.736509    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:01.739991    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:01.773642    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.773642    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:01.777617    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:01.811025    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.811141    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:01.811141    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:01.811141    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:01.881533    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:01.881533    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.919632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:01.919632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:02.020491    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:02.020538    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:02.020587    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:02.059933    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:02.060031    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
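
The burst ending here repeats once per control-plane component: minikube asks Docker for any container named k8s_<component>, and when every query comes back empty it falls back to gathering kubelet, dmesg, describe-nodes, and Docker logs. A rough local stand-in for that discovery loop, using os/exec in place of the ssh_runner calls shown (the component list and docker flags mirror the log; docker is assumed to be on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		// Same filter as the log: containers named k8s_<component>, IDs only.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("docker ps failed for %q: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
    	}
    }
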
	I1216 06:22:04.620893    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:04.645761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:04.680596    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.680596    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:04.684611    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:04.712607    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.712607    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:04.716737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:04.744218    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.744218    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:04.748501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:04.787600    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.787668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:04.791349    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:04.825500    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.825547    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:04.829098    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:04.878465    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.878465    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:04.881466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:04.910168    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.910168    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:04.914167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:04.949752    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.949810    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:04.949810    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:04.949873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.143585    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:22:01.161031    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:22:01.179456    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.197251    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:22:01.216028    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.226660    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.230697    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.278644    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:22:01.297647    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:22:01.317326    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.341360    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:22:01.367643    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.377139    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.383754    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.440843    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:22:01.457977    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
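
The ln/openssl pairs above maintain an OpenSSL-style hashed certificate directory: each CA under /usr/share/ca-certificates gets a symlink /etc/ssl/certs/<subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints. A sketch of one such step, shelling out to openssl exactly as the log does (paths are the ones shown; assumes openssl is installed and the process can write to /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkHashed creates /etc/ssl/certs/<hash>.0 -> certPath so OpenSSL can
    // locate the CA by subject hash during verification.
    func linkHashed(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // equivalent to ln -fs: drop any stale link first
    	return os.Symlink(certPath, link)
    }

    func main() {
    	fmt.Println(linkHashed("/usr/share/ca-certificates/11704.pem"))
    }
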
	I1216 06:22:01.476683    4424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:22:01.483599    4424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:22:01.484303    4424 kubeadm.go:401] StartCluster: {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:22:01.490132    4424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:22:01.529050    4424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:22:01.545461    4424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:22:01.559986    4424 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:22:01.564509    4424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:22:01.575681    4424 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:22:01.575681    4424 kubeadm.go:158] found existing configuration files:
	
	I1216 06:22:01.581349    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:22:01.593595    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:22:01.599386    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:22:01.618969    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:22:01.633516    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:22:01.638266    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:22:01.656598    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.670398    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:22:01.674972    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.695466    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:22:01.709055    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:22:01.713665    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
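
The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; anything else, including files that do not exist at all (as here), is removed so kubeadm init writes fresh copies. A local sketch of that check (the log runs it over SSH with sudo; the file set and endpoint match the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range confs {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // already targets the expected endpoint; keep it
    		}
    		// Missing or pointing elsewhere: remove so kubeadm regenerates it.
    		os.Remove(f)
    		fmt.Println("removed (or absent):", f)
    	}
    }
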
	I1216 06:22:01.733357    4424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:22:01.884136    4424 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:22:01.891445    4424 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:22:01.994223    4424 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 06:22:06.379758    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:08.874715    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:04.987656    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:04.987703    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:05.093013    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:05.093013    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:05.093013    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:05.148503    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:05.148503    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:05.222357    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:05.222357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:07.791130    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:07.816699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:07.846890    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.846890    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:07.850551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:07.885179    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.885179    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:07.889622    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:07.920925    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.920925    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:07.925517    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:07.955043    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.955043    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:07.959825    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:07.988928    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.988928    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:07.993735    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:08.025335    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.025335    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:08.031801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:08.063231    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.063231    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:08.068525    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:08.106217    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.106217    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:08.106217    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:08.106217    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:08.173411    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:08.173411    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:08.241764    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:08.241764    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:08.282741    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:08.282741    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:08.376141    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:08.376181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:08.376246    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:10.875960    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:13.371029    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:13.873624    7800 pod_ready.go:94] pod "coredns-66bc5c9577-tcbrk" is "Ready"
	I1216 06:22:13.873624    7800 pod_ready.go:86] duration metric: took 12.0128951s for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.879094    7800 pod_ready.go:83] waiting for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.889705    7800 pod_ready.go:94] pod "etcd-bridge-030800" is "Ready"
	I1216 06:22:13.889705    7800 pod_ready.go:86] duration metric: took 10.6111ms for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.893578    7800 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.912416    7800 pod_ready.go:94] pod "kube-apiserver-bridge-030800" is "Ready"
	I1216 06:22:13.912416    7800 pod_ready.go:86] duration metric: took 18.8376ms for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.917120    7800 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.068093    7800 pod_ready.go:94] pod "kube-controller-manager-bridge-030800" is "Ready"
	I1216 06:22:14.068093    7800 pod_ready.go:86] duration metric: took 150.9707ms for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.266154    7800 pod_ready.go:83] waiting for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.666596    7800 pod_ready.go:94] pod "kube-proxy-pbfkb" is "Ready"
	I1216 06:22:14.666596    7800 pod_ready.go:86] duration metric: took 400.436ms for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:10.906574    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:10.929977    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:10.963006    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.963006    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:10.966334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:10.995517    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.995517    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:10.998887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:11.027737    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.027771    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:11.034529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:11.070221    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.070221    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:11.075447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:11.105575    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.105575    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:11.108569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:11.143549    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.143549    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:11.146562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:11.178034    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.178034    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:11.181411    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:11.211522    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.211522    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:11.211522    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:11.211522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:11.244289    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:11.244289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:11.295870    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:11.295870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:11.359418    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:11.360418    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:11.394416    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:11.394416    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:11.489247    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:13.994214    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:14.016691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:14.049641    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.049641    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:14.053607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:14.088893    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.088893    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:14.092847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:14.131857    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.131857    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:14.135845    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:14.168503    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.168503    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:14.172477    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:14.200948    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.200948    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:14.204642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:14.234975    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.234975    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:14.238802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:14.274052    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.274107    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:14.277642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:14.306199    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.306199    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:14.306199    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:14.306199    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:14.374972    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:14.374972    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:14.411356    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:14.411356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:14.498252    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:14.498283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:14.498283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:14.528112    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:14.528112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:14.872200    7800 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:94] pod "kube-scheduler-bridge-030800" is "Ready"
	I1216 06:22:15.267078    7800 pod_ready.go:86] duration metric: took 394.8723ms for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:40] duration metric: took 23.4251556s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:15.362849    7800 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:15.367720    7800 out.go:179] * Done! kubectl is now configured to use "bridge-030800" cluster and "default" namespace by default
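
The start.go:625 line above compares the client kubectl version against the cluster's server version and reports the minor-version skew (0 here, so no warning follows). A sketch of that skew computation over "major.minor.patch" strings; the parsing is deliberately simplified and is not minikube's code (real version strings can carry pre-release suffixes):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns |minor(a) - minor(b)| for versions like "1.34.3".
    func minorSkew(a, b string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("bad version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	ma, err := minor(a)
    	if err != nil {
    		return 0, err
    	}
    	mb, err := minor(b)
    	if err != nil {
    		return 0, err
    	}
    	if ma > mb {
    		return ma - mb, nil
    	}
    	return mb - ma, nil
    }

    func main() {
    	skew, _ := minorSkew("1.34.3", "1.34.2")
    	fmt.Printf("kubectl: 1.34.3, cluster: 1.34.2 (minor skew: %d)\n", skew)
    }
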
	I1216 06:22:17.092050    4424 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:22:17.093065    4424 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:22:17.093065    4424 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:22:17.096059    4424 out.go:252]   - Generating certificates and keys ...
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:22:17.099055    4424 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:22:17.099055    4424 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:22:17.102055    4424 out.go:252]   - Booting up control plane ...
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:22:17.104058    4424 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.507351804s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.957344338s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.90080548s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002224001s
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:22:17.106067    4424 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:22:17.107057    4424 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:22:17.107057    4424 kubeadm.go:319] [bootstrap-token] Using token: rs8etp.b2dh1vgtia9jcvb4
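
The kubelet-check and control-plane-check lines earlier in this kubeadm output are plain HTTP polls: kubeadm keeps issuing GETs against each component's healthz/livez endpoint until one answers 200 OK or the 4m0s budget runs out. A minimal poller in that spirit, not kubeadm's implementation; the endpoint and overall timeout come from the log, the per-request timeout and interval are assumptions, and TLS verification is skipped only because this sketch has no client certificates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz GETs url until it returns 200 OK or timeout elapses.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: kubeadm authenticates properly rather than skipping verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(pollHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute))
    }
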
	I1216 06:22:17.081041    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:17.103056    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:17.137059    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.137059    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:17.141064    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:17.172640    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.172640    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:17.176638    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:17.210910    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.210910    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:17.215347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:17.248986    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.248986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:17.252989    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:17.287415    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.287415    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:17.293572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:17.324098    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.324098    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:17.330062    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:17.366512    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.366512    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:17.370101    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:17.402400    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.402400    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:17.402400    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:17.402400    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:17.455027    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:17.455027    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:17.513029    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:17.513029    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:17.548022    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:17.548022    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:17.645629    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:17.645629    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:17.645629    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:17.110053    4424 out.go:252]   - Configuring RBAC rules ...
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:22:17.111060    4424 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.111060    4424 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:22:17.113053    4424 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:22:17.113053    4424 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:22:17.113053    4424 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--control-plane 
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
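
The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA before trusting anything it signs. The same value can be recomputed from ca.crt with the standard library alone (the path below is kubeadm's default location):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
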
	I1216 06:22:17.114052    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:17.114052    4424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-030800 minikube.k8s.io/updated_at=2025_12_16T06_22_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kubenet-030800 minikube.k8s.io/primary=true
	I1216 06:22:17.134054    4424 ops.go:34] apiserver oom_adj: -16
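The oom_adj probe above verifies the API server's OOM-killer priority: the kubelet starts control-plane static pods with a strongly negative score (here -16 on the legacy -17..15 oom_adj scale, where -17 disables OOM kills entirely), so the kernel reclaims them last under memory pressure. The same check by hand, assuming a single kube-apiserver process on the node:

    sudo cat /proc/$(pgrep -xn kube-apiserver)/oom_adj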
	I1216 06:22:17.253989    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.753536    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.254825    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.755186    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.255440    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.754492    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.256463    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.753254    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.253896    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.753097    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.858877    4424 kubeadm.go:1114] duration metric: took 4.7437541s to wait for elevateKubeSystemPrivileges
	I1216 06:22:21.858877    4424 kubeadm.go:403] duration metric: took 20.3742909s to StartCluster
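The repeated `kubectl get sa default` calls above are a ~500ms poll loop: kubeadm creates the "default" ServiceAccount asynchronously once the API server is serving, and minikube waits for it before finishing elevateKubeSystemPrivileges. A hand-rolled equivalent of the same wait, run inside the node:

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done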
	I1216 06:22:21.858877    4424 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.858877    4424 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:22:21.861003    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.861972    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:22:21.861972    4424 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:22:21.861972    4424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:22:21.861972    4424 addons.go:70] Setting storage-provisioner=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:239] Setting addon storage-provisioner=true in "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:70] Setting default-storageclass=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:22:21.861972    4424 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-030800"
	I1216 06:22:21.861972    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.864167    4424 out.go:179] * Verifying Kubernetes components...
	I1216 06:22:21.875224    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:21.939068    4424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:22:21.939740    4424 addons.go:239] Setting addon default-storageclass=true in "kubenet-030800"
	I1216 06:22:21.939740    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.942493    4424 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:21.942493    4424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:22:21.947611    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:21.951961    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:22.001257    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.003241    4424 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.003241    4424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:22:22.006248    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:22.070295    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.425928    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:22:22.444230    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:22.451290    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.540661    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:24.151685    4424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7257338s)
	I1216 06:22:24.151837    4424 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
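The sed pipeline completed above rewrites the CoreDNS Corefile in place: it inserts a hosts block before the `forward . /etc/resolv.conf` line, adds a `log` directive before `errors`, and replaces the ConfigMap. The injected block resolves host.minikube.internal to the Docker host gateway:

        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }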
	I1216 06:22:24.529871    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.0785053s)
	I1216 06:22:24.529983    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.0856125s)
	I1216 06:22:24.530029    4424 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9893406s)
	I1216 06:22:24.535621    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:24.547997    4424 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:22:20.178315    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:20.202308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:20.231344    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.231344    8452 logs.go:284] No container was found matching "kube-apiserver"
E1216 06:25:40.323858   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
	I1216 06:22:20.236317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:20.279459    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.279459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:20.283465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:20.322463    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.322463    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:20.327465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:20.366466    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.366466    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:20.371478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:20.409468    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.409468    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:20.413471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:20.447432    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.447432    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:20.451099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:20.486103    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.486103    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:20.490094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:20.530098    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.530098    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:20.530098    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:20.530098    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:20.557089    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:20.557089    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:20.606234    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:20.607239    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:20.667498    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:20.667498    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:20.703674    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:20.703674    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:20.796605    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
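This gather-and-retry block comes from a concurrently running test profile (pid 8452, using the v1.35.0-beta.0 binaries, interleaved with the kubenet-030800 lines from pid 4424). Every `docker ps -a --filter=name=k8s_...` probe matches the cri-dockerd container naming prefix and returns zero containers, and kubectl cannot reach localhost:8443, so that profile's control plane is not up yet. The loop is effectively waiting for the apiserver to answer; a rough equivalent from inside that node, assuming the default 8443 secure port:

    until curl -ksf https://localhost:8443/healthz >/dev/null; do sleep 2; done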
	I1216 06:22:23.300916    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:23.324266    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:23.355598    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.355598    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:23.359141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:23.390554    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.390644    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:23.394340    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:23.423019    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.423019    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:23.426772    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:23.456953    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.457021    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:23.460762    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:23.491477    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.491477    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:23.495183    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:23.527107    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.527107    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:23.531577    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:23.559306    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.559306    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:23.563381    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:23.592615    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.592615    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:23.592615    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:23.592615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:23.630103    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:23.630103    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:23.719384    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:23.719514    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:23.719546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:23.746097    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:23.746097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:23.807727    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:23.807727    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:24.550004    4424 addons.go:530] duration metric: took 2.6879945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:22:24.591996    4424 node_ready.go:35] waiting up to 15m0s for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 node_ready.go:49] node "kubenet-030800" is "Ready"
	I1216 06:22:24.646202    4424 node_ready.go:38] duration metric: took 54.2051ms for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:22:24.652200    4424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:24.721472    4424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-030800" context rescaled to 1 replica
	I1216 06:22:24.735392    4424 api_server.go:72] duration metric: took 2.87338s to wait for apiserver process to appear ...
	I1216 06:22:24.735392    4424 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:22:24.735392    4424 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56385/healthz ...
	I1216 06:22:24.821241    4424 api_server.go:279] https://127.0.0.1:56385/healthz returned 200:
	ok
	I1216 06:22:24.825583    4424 api_server.go:141] control plane version: v1.34.2
	I1216 06:22:24.825583    4424 api_server.go:131] duration metric: took 90.1899ms to wait for apiserver health ...
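The healthz gate above is a plain HTTPS GET against the Docker-published apiserver port (56385 is specific to this run); a 200 with body "ok" is what lets the check proceed to the pod list:

    curl -k https://127.0.0.1:56385/healthz
    ok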
	I1216 06:22:24.825583    4424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:22:24.832936    4424 system_pods.go:59] 8 kube-system pods found
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.833022    4424 system_pods.go:61] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.833131    4424 system_pods.go:74] duration metric: took 7.4392ms to wait for pod list to return data ...
	I1216 06:22:24.833131    4424 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:22:24.838156    4424 default_sa.go:45] found service account: "default"
	I1216 06:22:24.838156    4424 default_sa.go:55] duration metric: took 5.0253ms for default service account to be created ...
	I1216 06:22:24.838156    4424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:22:24.844228    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.844228    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.844228    4424 retry.go:31] will retry after 236.325715ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.105694    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.105749    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.105770    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.105848    4424 retry.go:31] will retry after 372.640753ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.532382    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.532482    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.532587    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.532611    4424 retry.go:31] will retry after 313.138834ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.853141    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.853661    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.853715    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.853777    4424 retry.go:31] will retry after 472.942865ms: missing components: kube-dns, kube-proxy
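Each retry above re-lists the kube-system pods with backoff until no required component is missing; kube-dns (the coredns pods) and kube-proxy are still Pending at this point. Waiting on the same condition with plain kubectl might look like this, assuming the profile's kubeconfig context name:

    kubectl --context kubenet-030800 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m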
	I1216 06:22:26.382913    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:26.404112    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:26.436722    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.436722    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:26.440749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:26.470877    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.470877    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:26.474941    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:26.503887    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.503950    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:26.508216    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:26.538317    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.538317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:26.542754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:26.571126    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.571189    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:26.574883    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:26.604762    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.604762    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:26.608705    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:26.637404    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.637444    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:26.641214    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:26.669720    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.669720    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:26.669720    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:26.669720    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:26.707289    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:26.707289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:26.791357    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:26.791357    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:26.791357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:26.817227    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:26.817227    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.865832    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:26.865832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.436231    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:29.459817    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:29.493134    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.493186    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:29.497118    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:29.526722    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.526722    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:29.531481    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:29.561672    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.561718    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:29.566882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:29.595896    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.595947    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:29.599655    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:29.628575    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.628661    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:29.632644    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:29.660164    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.660164    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:29.663829    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:29.694413    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.694413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:29.698152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:29.725286    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.725286    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:29.725355    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:29.725355    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.787721    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:29.787721    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:29.828376    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:29.828376    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:29.916249    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:29.916249    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:29.916249    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:29.942276    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:29.942276    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.336069    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Running
	I1216 06:22:26.336069    4424 system_pods.go:126] duration metric: took 1.4978916s to wait for k8s-apps to be running ...
	I1216 06:22:26.336069    4424 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:22:26.342244    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:22:26.368294    4424 system_svc.go:56] duration metric: took 32.1861ms WaitForService to wait for kubelet
	I1216 06:22:26.368345    4424 kubeadm.go:587] duration metric: took 4.5062595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:22:26.368345    4424 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:22:26.376647    4424 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:22:26.376691    4424 node_conditions.go:123] node cpu capacity is 16
	I1216 06:22:26.376745    4424 node_conditions.go:105] duration metric: took 8.3456ms to run NodePressure ...
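The NodePressure pass reads capacity straight off the Node object; the same figures (16 CPUs, 1055762868Ki ephemeral storage) are visible with, for example:

    kubectl --context kubenet-030800 get node kubenet-030800 \
      -o jsonpath='{.status.capacity}'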
	I1216 06:22:26.376745    4424 start.go:242] waiting for startup goroutines ...
	I1216 06:22:26.376745    4424 start.go:247] waiting for cluster config update ...
	I1216 06:22:26.376795    4424 start.go:256] writing updated cluster config ...
	I1216 06:22:26.382913    4424 ssh_runner.go:195] Run: rm -f paused
	I1216 06:22:26.391122    4424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:26.399112    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:28.410987    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:30.912607    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
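This final gate polls every pod carrying one of the listed control-plane labels until it is "Ready" or gone, within a 4m budget; the coredns pod is still starting, hence the repeated warnings. To watch the same rollout interactively one could run, for instance:

    kubectl --context kubenet-030800 -n kube-system get pods -l k8s-app=kube-dns -w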
	I1216 06:22:32.497361    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:32.517362    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:32.549841    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.549912    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:32.553592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:32.582070    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.582070    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:32.585068    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:32.612095    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.612095    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:32.615889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:32.644953    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.644953    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:32.649025    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:32.676348    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.676429    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:32.680134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:32.708040    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.708040    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:32.712034    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:32.745789    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.745789    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:32.752533    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:32.781449    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.781504    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:32.781504    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:32.781504    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:32.843135    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:32.843135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:32.881564    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:32.881564    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:32.982597    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:32.982597    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:32.982597    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:33.013212    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:33.013212    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:22:33.410898    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:35.912070    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	I1216 06:22:35.578218    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:35.601163    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:35.629786    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.629786    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:35.634440    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:35.663168    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.663168    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:35.667699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:35.699050    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.699050    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:35.703224    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:35.736149    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.736149    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:35.741542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:35.772450    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.772450    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:35.776692    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:35.804150    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.804150    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:35.808799    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:35.837871    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.837871    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:35.841100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:35.870769    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.870769    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
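Each `docker ps -a --filter=name=k8s_...` probe above leans on the container-naming convention used by cri-dockerd (inherited from the old dockershim), where Kubernetes-managed containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>. Every probe returning "0 containers" therefore suggests the control-plane containers were never created at all, not merely that they exited. A sketch of the same check done by hand:

	# list every Kubernetes-managed container, running or exited
	docker ps -a --filter "name=k8s_" --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"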
	I1216 06:22:35.870769    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:35.870769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:35.934803    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:35.934803    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:35.973201    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:35.973201    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:36.070057    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:36.070057    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:36.070057    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:36.098690    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:36.098690    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
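The container-status command above uses a small shell fallback worth noting: `which crictl || echo crictl` substitutes the resolved path when crictl is on PATH and the bare name otherwise, and the trailing `|| sudo docker ps -a` falls back to the Docker CLI if crictl is missing or its invocation fails, so the probe works on both CRI-based and plain-Docker runtimes.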
	I1216 06:22:38.663786    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:38.688639    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:38.718646    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.718646    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:38.721640    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:38.751651    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.751651    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:38.754647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:38.784327    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.784327    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:38.788327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:38.815337    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.815337    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:38.818328    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:38.846331    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.846331    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:38.849339    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:38.880297    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.880297    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:38.884227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:38.917702    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.917702    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:38.920940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:38.964973    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.964973    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:38.964973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:38.964973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:38.999971    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:38.999971    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:39.102927    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:39.102927    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:39.102927    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:39.141934    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:39.141934    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:39.210081    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:39.210081    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:36.404625    4424 pod_ready.go:99] pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8qrgg" not found
	I1216 06:22:36.404625    4424 pod_ready.go:86] duration metric: took 10.0053735s for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:36.404625    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:38.415310    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:40.417680    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:41.775031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:41.798710    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:41.831778    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.831778    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:41.835461    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:41.866411    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.866411    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:41.871544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:41.902486    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.902486    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:41.905907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:41.932887    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.932887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:41.935886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:41.965890    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.965890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:41.968887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:42.000893    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.000893    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:42.004876    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:42.043522    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.043591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:42.049149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:42.081678    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.081678    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:42.081678    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:42.081678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:42.140208    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:42.140208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:42.198197    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:42.198197    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:42.241586    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:42.241586    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:42.350617    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:42.350617    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:42.350617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:44.884303    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:44.902304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:44.933421    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.933421    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:44.938149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:44.974292    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.974334    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:44.977512    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1216 06:22:42.418518    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:44.914304    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:45.010620    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.010620    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:45.013618    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:45.047628    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.047628    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:45.050627    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:45.089756    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.089850    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:45.096356    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:45.137323    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.137323    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:45.141322    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:45.169330    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.170335    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:45.173321    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:45.202336    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.202336    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:45.202336    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:45.202336    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:45.227331    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:45.227331    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:45.275577    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:45.275630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:45.335206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:45.335206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:45.372222    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:45.372222    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:45.471935    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:47.976320    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:48.004505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:48.037430    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.037430    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:48.040437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:48.076428    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.076477    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:48.081194    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:48.118536    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.118536    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:48.124810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:48.153702    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.153702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:48.159558    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:48.187736    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.187736    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:48.192607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:48.225619    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.225619    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:48.229580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:48.260085    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.260085    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:48.263087    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:48.294313    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.294376    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:48.294376    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:48.294425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:48.345094    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:48.345094    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:48.423576    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:48.423576    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:48.459577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:48.459577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:48.548441    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:48.548441    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:48.548441    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:47.414818    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:49.417236    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:51.080561    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:51.104134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:51.132144    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.132144    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:51.136151    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:51.163962    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.163962    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:51.169361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:51.198404    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.198404    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:51.201253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:51.229899    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.229899    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:51.232895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:51.261881    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.261881    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:51.264887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:51.295306    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.295306    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:51.298763    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:51.331779    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.331850    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:51.337211    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:51.367502    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.367502    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:51.367502    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:51.367502    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:51.424226    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:51.424226    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:51.482475    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:51.482475    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:51.527426    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:51.527426    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:51.618444    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:51.618444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:51.618444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.148108    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:54.167190    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:54.198456    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.198456    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:54.202605    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:54.236901    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.236901    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:54.240906    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:54.272541    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.272541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:54.277008    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:54.312764    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.312764    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:54.317359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:54.347564    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.347564    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:54.350557    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:54.377557    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.377557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:54.381564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:54.411585    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.411585    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:54.415565    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:54.447567    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.447567    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:54.447567    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:54.447567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:54.483559    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:54.483559    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:54.589583    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:54.589583    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:54.589583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.617283    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:54.617349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:54.673906    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:54.673990    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 06:22:51.420194    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:53.916809    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:55.919718    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:58.419688    4424 pod_ready.go:94] pod "coredns-66bc5c9577-w7zmc" is "Ready"
	I1216 06:22:58.419688    4424 pod_ready.go:86] duration metric: took 22.0147573s for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.424677    4424 pod_ready.go:83] waiting for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.432677    4424 pod_ready.go:94] pod "etcd-kubenet-030800" is "Ready"
	I1216 06:22:58.432677    4424 pod_ready.go:86] duration metric: took 7.9992ms for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.435689    4424 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.459477    4424 pod_ready.go:94] pod "kube-apiserver-kubenet-030800" is "Ready"
	I1216 06:22:58.459477    4424 pod_ready.go:86] duration metric: took 22.793ms for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.463834    4424 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.611617    4424 pod_ready.go:94] pod "kube-controller-manager-kubenet-030800" is "Ready"
	I1216 06:22:58.611617    4424 pod_ready.go:86] duration metric: took 147.7381ms for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.811398    4424 pod_ready.go:83] waiting for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.211755    4424 pod_ready.go:94] pod "kube-proxy-5b9l9" is "Ready"
	I1216 06:22:59.211755    4424 pod_ready.go:86] duration metric: took 400.3513ms for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.412761    4424 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811735    4424 pod_ready.go:94] pod "kube-scheduler-kubenet-030800" is "Ready"
	I1216 06:22:59.811813    4424 pod_ready.go:86] duration metric: took 399.0464ms for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811850    4424 pod_ready.go:40] duration metric: took 33.4202632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:59.926671    4424 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:59.930035    4424 out.go:179] * Done! kubectl is now configured to use "kubenet-030800" cluster and "default" namespace by default
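With kubenet-030800 reported ready and its context written to the kubeconfig, a quick manual verification of the result logged above might look like the following (context name taken from the "Done!" line; the exact pod list will vary):

	# confirm the configured context reaches the new cluster
	kubectl --context kubenet-030800 get nodes
	kubectl --context kubenet-030800 -n kube-system get pods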
	I1216 06:22:57.250472    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:57.271468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:57.303800    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.303800    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:57.306801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:57.338803    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.338803    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:57.341800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:57.369018    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.369018    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:57.372806    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:57.403510    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.403510    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:57.406808    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:57.440995    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.440995    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:57.444225    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:57.475612    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.475612    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:57.479607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:57.509842    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.509842    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:57.513186    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:57.545981    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.545981    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:57.545981    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:57.545981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:57.636635    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:57.636635    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:57.636635    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:57.662639    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:57.662639    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:57.720464    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:57.720464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:57.782460    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:57.782460    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.324364    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:00.344368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:00.375358    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.375358    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:00.378355    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:00.410368    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.410368    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:00.414359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:00.442364    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.442364    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:00.446359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:00.476371    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.476371    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:00.479359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:00.508323    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.508323    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:00.512431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:00.550611    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.550611    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:00.553606    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:00.586336    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.586336    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:00.590552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:00.624129    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.624129    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:00.624129    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:00.624129    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:00.685547    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:00.685547    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.737417    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:00.737417    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:00.858025    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
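(Editorial note, not part of the captured log.) Every `kubectl describe nodes` retry in this section fails identically with "connect: connection refused" against localhost:8443 — nothing is listening on the apiserver port inside the node, so the TCP dial itself is rejected before any HTTP request is made. A minimal, hypothetical readiness probe (not minikube code) that reproduces exactly this failure mode:

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverListening reports whether anything accepts TCP connections on
// addr. When the apiserver is down, DialTimeout returns the same
// "connect: connection refused" error seen throughout the log above.
func apiserverListening(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(apiserverListening("localhost:8443", 2*time.Second))
}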
	I1216 06:23:00.858025    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:00.858025    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:00.886607    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:00.886607    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
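(Editorial note.) Each poll cycle above checks for control-plane containers by running `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per component and logging "0 containers: []" when the output is empty. A hypothetical re-creation of that check — names and structure are illustrative, not minikube's actual API, and it assumes a Docker CLI on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of containers whose name matches the
// k8s_<component> prefix, mirroring the filtered `docker ps -a` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// An empty result corresponds to the log's "0 containers: []".
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}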
	I1216 06:23:03.463847    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:03.826614    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:03.881622    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.881622    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:03.887610    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:03.936557    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.937539    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:03.941562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:03.979542    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.979542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:03.983550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:04.020535    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.020535    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:04.025547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:04.064541    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.064541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:04.068548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:04.101538    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.101538    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:04.104544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:04.141752    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.141752    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:04.146757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:04.182755    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.182755    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:04.182755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:04.182755    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:04.305758    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:04.305758    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:04.356425    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:04.356425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:04.487429    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:04.487429    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:04.487429    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:04.526318    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:04.526362    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
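(Editorial note.) The "container status" step uses a shell fallback — prefer crictl when installed, otherwise fall back to `docker ps -a` — via `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`. A hypothetical sketch of the same prefer-A-else-B pattern in Go (assumes sudo and at least one of the two CLIs are available):

package main

import (
	"fmt"
	"os/exec"
)

// listContainers runs `crictl ps -a` when crictl is on PATH, and
// otherwise falls back to `docker ps -a`, like the shell one-liner above.
func listContainers() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(string(out))
}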
	I1216 06:23:07.087022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:07.110346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:07.137790    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.137790    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:07.141786    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:07.174601    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.174601    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:07.179419    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:07.211656    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.211656    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:07.216897    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:07.250459    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.250459    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:07.254048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:07.282207    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.282207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:07.285851    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:07.313925    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.313925    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:07.317529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:07.348851    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.348851    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:07.353083    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:07.381401    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.381401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:07.381401    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:07.381401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:07.408641    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:07.408641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.450935    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:07.450935    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:07.512733    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:07.512733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:07.552522    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:07.552522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:07.649624    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.155054    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:10.178201    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:10.207068    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.207068    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:10.210473    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:10.239652    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.239652    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:10.242766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:10.274887    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.274887    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:10.278519    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:10.308294    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.308351    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:10.312209    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:10.342572    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.342572    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:10.346437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:10.375569    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.375630    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:10.378861    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:10.405446    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.405446    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:10.410730    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:10.441244    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.441244    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:10.441244    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:10.441244    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:10.502753    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:10.502753    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:10.540437    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:10.540437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:10.626853    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.626853    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:10.626853    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:10.654987    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:10.655058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.213336    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:13.237358    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:13.266636    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.266721    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:13.270023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:13.297369    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.297434    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:13.300782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:13.336039    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.336039    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:13.341919    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:13.370523    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.370523    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:13.374455    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:13.404606    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.404606    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:13.408542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:13.437373    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.437431    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:13.441106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:13.470738    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.470738    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:13.474495    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:13.502203    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.502262    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:13.502262    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:13.502293    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.552578    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:13.552578    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:13.617499    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:13.617499    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:13.660047    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:13.660047    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:13.747316    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:13.747316    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:13.747316    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.284216    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:16.307907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:16.344535    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.344535    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:16.347847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:16.379001    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.379021    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:16.382292    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:16.413093    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.413116    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:16.418012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:16.456763    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.456826    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:16.460621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:16.491671    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.491693    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:16.495352    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:16.527862    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.527862    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:16.534704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:16.564194    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.564243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:16.570369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:16.601444    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.601444    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:16.601444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:16.601444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.631785    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:16.631785    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:16.675190    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:16.675190    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:16.737700    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:16.737700    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:16.775092    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:16.775092    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:16.865026    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:19.370669    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:19.393524    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:19.423405    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.423513    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:19.427307    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:19.459137    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.459238    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:19.462635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:19.493542    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.493542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:19.497334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:19.526496    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.526496    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:19.529949    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:19.559120    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.559120    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:19.562460    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:19.591305    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.591305    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:19.595794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:19.625200    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.626193    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:19.629187    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:19.657201    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.657201    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:19.657270    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:19.657270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:19.722496    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:19.722496    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:19.761161    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:19.761161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:19.852755    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:19.853756    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:19.853756    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:19.880330    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:19.881280    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.458668    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:22.483505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:22.514647    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.514647    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:22.518193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:22.551494    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.551494    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:22.555268    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:22.586119    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.586119    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:22.590107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:22.621733    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.621733    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:22.624739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:22.651728    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.651728    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:22.655725    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:22.687826    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.687826    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:22.692217    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:22.727413    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.727413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:22.731318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:22.769477    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.769477    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:22.770462    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:22.770462    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:22.795455    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:22.795455    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.851473    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:22.851473    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:22.911454    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:22.912459    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:22.948112    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:22.948112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:23.039238    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:25.544174    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:25.571784    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:25.610368    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.610422    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:25.615377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:25.651080    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.651129    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:25.655234    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:25.695942    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.695942    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:25.700548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:25.727743    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.727743    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:25.730739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:25.765620    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.765650    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:25.769261    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:25.805072    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.805127    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:25.810318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:25.840307    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.840307    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:25.844490    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:25.888279    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.888279    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:25.888279    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:25.888279    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:25.964206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:25.964206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:26.003275    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:26.003275    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:26.111485    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:26.111485    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:26.111485    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:26.146819    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:26.146819    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:28.694382    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:28.716947    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:28.753062    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.753062    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:28.756810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:28.789692    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.789692    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:28.794681    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:28.823690    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.823690    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:28.827683    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:28.858686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.858686    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:28.861688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:28.891686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.891686    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:28.894684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:28.923683    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.923683    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:28.926684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:28.958314    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.958314    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:28.962325    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:28.991317    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.991317    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:28.991317    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:28.991317    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:29.039348    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:29.039348    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:29.103117    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:29.103117    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:29.148003    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:29.148003    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:29.240448    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:29.240448    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:29.240448    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:31.772923    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:31.796203    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:31.827485    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.827485    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:31.830572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:31.873718    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.873718    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:31.877445    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:31.926391    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.926391    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:31.929391    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:31.964572    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.964572    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:31.968096    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:32.003776    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.003776    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:32.007175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:32.046322    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.046322    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:32.049283    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:32.077299    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.077299    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:32.080289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:32.114717    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.114793    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:32.114793    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:32.114843    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:32.191987    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:32.191987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:32.237143    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:32.237143    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:32.331899    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:32.331899    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:32.331899    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:32.362021    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:32.362021    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:34.918825    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:34.945647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:34.976745    8452 logs.go:282] 0 containers: []
	W1216 06:23:34.976745    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:34.980636    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:35.012295    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.012295    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:35.015295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:35.047289    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.047289    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:35.050289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:35.081492    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.081492    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:35.085580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:35.121645    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.121645    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:35.126840    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:35.167976    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.167976    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:35.170966    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:35.201969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.201969    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:35.204969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:35.232969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.233980    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:35.233980    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:35.233980    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:35.292973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:35.292973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:35.327973    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:35.327973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:35.420114    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:35.420114    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:35.420114    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:35.451148    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:35.451148    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:38.010056    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:38.035506    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:38.071853    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.071853    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:38.075564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:38.106543    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.106543    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:38.109547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:38.143669    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.143669    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:38.152737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:38.191923    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.191923    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:38.195575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:38.225935    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.225935    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:38.228939    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:38.268550    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.268550    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:38.271759    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:38.304387    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.304421    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:38.307849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:38.341968    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.341968    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:38.341968    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:38.341968    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:38.404267    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:38.404267    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:38.443104    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:38.443104    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:38.551474    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:38.551474    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:38.551474    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:38.582843    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:38.582869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.141896    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:41.185331    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:41.218961    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.219548    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:41.223789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:41.252376    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.252376    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:41.255368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:41.285378    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.285378    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:41.288369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:41.318383    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.318383    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:41.321372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:41.349373    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.349373    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:41.353377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:41.390105    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.390105    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:41.393103    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:41.425109    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.425109    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:41.428107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:41.462594    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.462594    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:41.462594    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:41.462594    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:41.492096    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:41.492156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.553755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:41.553806    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:41.622329    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:41.622329    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:41.664016    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:41.664016    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:41.759009    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:44.265223    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:44.286309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:44.319583    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.319583    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:44.324575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:44.358046    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.358114    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:44.361895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:44.390541    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.390541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:44.395354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:44.433163    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.433163    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:44.436754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:44.470605    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.470605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:44.475856    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:44.504412    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.504484    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:44.508013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:44.540170    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.540170    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:44.545802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:44.574593    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.575118    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:44.575181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:44.575181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:44.609181    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:44.609231    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:44.663988    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:44.663988    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:44.737678    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:44.737678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:44.777530    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:44.777530    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:44.868751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:47.373432    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:47.674375    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:47.705067    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.705067    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:47.709193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:47.739921    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.739921    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:47.743656    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:47.771970    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.771970    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:47.776451    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:47.808633    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.808633    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:47.813124    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:47.856079    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.856079    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:47.859452    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:47.891897    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.891897    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:47.895769    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:47.926050    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.926050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:47.929679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:47.962571    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.962571    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:47.962571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:47.962571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:48.026367    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:48.026367    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:48.063580    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:48.063580    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:48.173751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:48.173792    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:48.173792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:48.199403    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:48.199403    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:50.750699    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:50.774573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:50.804983    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.804983    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:50.808894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:50.838533    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.838533    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:50.842242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:50.873377    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.873377    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:50.877508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:50.907646    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.907646    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:50.912410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:50.943853    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.943853    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:50.950275    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:50.977570    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.977570    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:50.982575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:51.010211    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.010211    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:51.014545    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:51.048584    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.048584    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:51.048584    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:51.048584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:51.112725    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:51.112725    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:51.150854    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:51.150854    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:51.246494    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:51.246535    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:51.246535    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:51.274873    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:51.274873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:53.832981    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:53.857995    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:53.892159    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.892159    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:53.895775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:53.926160    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.926160    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:53.929408    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:53.956482    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.956552    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:53.959711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:53.989788    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.989788    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:53.993230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:54.022506    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.022506    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:54.025409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:54.054974    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.054974    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:54.059372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:54.088015    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.088015    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:54.092123    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:54.121961    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.121961    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:54.121961    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:54.121961    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:54.169232    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:54.169295    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:54.230158    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:54.231156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:54.267713    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:54.267713    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:54.368006    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:54.368006    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:54.368006    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:56.899723    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:56.923149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:56.957635    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.957635    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:56.961499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:56.988363    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.988363    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:56.992371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:57.021993    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.021993    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:57.025544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:57.055718    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.055718    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:57.060969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:57.092456    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.092523    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:57.096418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:57.125588    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.125588    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:57.129665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:57.160663    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.160663    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:57.164518    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:57.196231    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.196281    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:57.196281    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:57.196281    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:57.258973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:57.258973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:57.302939    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:57.302939    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:57.397577    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:57.397577    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:57.397577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:57.434801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:57.434801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:59.991022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:00.014170    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:00.046529    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.046529    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:00.050903    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:00.080796    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.080796    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:00.084418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:00.114858    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.114858    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:00.121404    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:00.152596    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.152596    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:00.156447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:00.183532    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.183648    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:00.187074    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:00.218971    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.218971    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:00.222929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:00.252086    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.252086    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:00.256309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:00.285884    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.285884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:00.285884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:00.285884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:00.364208    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:00.364208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:00.403464    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:00.403464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:00.495864    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:00.495864    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:00.495864    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:00.521592    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:00.521592    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:03.070724    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:03.093858    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:03.127112    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.127112    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:03.131265    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:03.161262    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.161262    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:03.165073    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:03.195882    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.195933    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:03.200488    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:03.230205    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.230205    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:03.234193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:03.263580    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.263629    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:03.267410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:03.297599    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.297652    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:03.300957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:03.329666    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.329720    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:03.333378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:03.365184    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.365236    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:03.365282    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:03.365282    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:03.428385    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:03.428385    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:03.465984    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:03.465984    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:03.557873    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:03.559101    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:03.559101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:03.586791    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:03.586791    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
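The eight `docker ps` probes repeated in each cycle above are minikube checking whether any control-plane container (running or exited) exists under its `k8s_` name prefix before falling back to log collection. A minimal bash sketch of that per-component probe, assuming a Docker runtime inside the node (the component names and the `k8s_` filter prefix are taken verbatim from the log; the loop itself is illustrative, not minikube's actual code):

    # Illustrative only: reproduce the container-presence probe seen above.
    # An empty result for a component is what yields the logged warning
    # 'No container was found matching "<component>"'.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids="$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')"
      if [ -z "${ids}" ]; then
        echo "No container was found matching \"${c}\"" >&2
      fi
    done

Every probe in this section returns zero containers, which is consistent with the repeated `dial tcp [::1]:8443: connect: connection refused` failures from `kubectl describe nodes`: the kubeadm control plane never came up, so there is no apiserver listening on localhost:8443 to answer.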
	I1216 06:24:06.142562    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:06.170227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:06.202672    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.202672    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:06.206691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:06.237624    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.237624    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:06.241559    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:06.267616    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.267616    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:06.271709    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:06.304567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.304567    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:06.308556    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:06.337567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.337567    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:06.344744    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:06.373520    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.373520    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:06.377184    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:06.411936    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.411936    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:06.415789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:06.447263    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.447263    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:06.447263    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:06.447263    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:06.509097    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:06.509097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:06.546188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:06.546188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:06.639923    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:06.639923    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:06.639923    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:06.666485    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:06.666519    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.221249    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:09.244788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:09.276490    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.276490    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:09.280706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:09.309520    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.309520    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:09.313105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:09.339092    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.339092    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:09.343484    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:09.369046    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.369046    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:09.373188    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:09.403810    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.403810    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:09.407108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:09.437156    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.437156    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:09.441754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:09.469752    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.469810    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:09.473378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:09.503754    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.503754    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:09.503754    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:09.503754    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:09.533645    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:09.533718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.587529    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:09.587529    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:09.647801    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:09.647801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:09.686577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:09.686577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:09.782674    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:12.288199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:12.313967    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:12.344043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.344043    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:12.348347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:12.378683    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.378683    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:12.382106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:12.411599    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.411599    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:12.415131    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:12.445826    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.445873    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:12.450940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:12.481043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.481078    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:12.484800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:12.512969    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.512990    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:12.515915    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:12.548151    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.548228    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:12.551706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:12.584039    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.584039    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:12.584039    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:12.584039    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:12.646680    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:12.646680    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:12.686545    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:12.686545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:12.804767    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:12.804767    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:12.804767    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:12.831866    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:12.831866    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:15.392415    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:15.416435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:15.445044    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.445044    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:15.449260    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:15.476688    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.476688    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:15.481012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:15.508866    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.508928    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:15.512662    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:15.541002    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.541002    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:15.545363    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:15.574947    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.574991    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:15.578407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:15.604751    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.604751    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:15.608699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:15.639261    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.639338    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:15.642317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:15.674404    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.674404    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:15.674404    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:15.674404    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:15.736218    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:15.736218    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:15.774188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:15.774188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:15.862546    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:15.862546    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:15.862546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:15.888115    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:15.888115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.441031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:18.465207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:18.495447    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.495481    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:18.498929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:18.528412    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.528476    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:18.531543    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:18.560175    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.560175    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:18.563996    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:18.592824    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.592894    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:18.596175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:18.623746    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.623746    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:18.627099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:18.652978    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.653013    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:18.656407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:18.683637    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.683686    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:18.687125    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:18.716903    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.716942    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:18.716964    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:18.716981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:18.743123    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:18.743675    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.794891    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:18.794891    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:18.858345    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:18.858345    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:18.894242    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:18.894242    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:18.979844    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:21.485585    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:21.510290    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:21.539823    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.539823    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:21.543159    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:21.575241    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.575241    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:21.579330    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:21.607389    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.607490    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:21.611023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:21.642332    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.642332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:21.645973    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:21.671339    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.671390    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:21.675048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:21.704483    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.704483    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:21.708499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:21.734944    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.735027    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:21.738688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:21.768890    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.768890    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:21.768987    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:21.768987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:21.800297    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:21.800344    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:21.854571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:21.854571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:21.921230    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:21.921230    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:21.961787    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:21.961787    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:22.060842    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:24.566957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:24.591909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:24.624010    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.624010    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:24.627550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:24.657938    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.657938    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:24.661917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:24.688848    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.688848    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:24.692388    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:24.722130    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.722165    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:24.725802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:24.754067    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.754134    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:24.757294    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:24.783522    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.783595    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:24.787022    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:24.818531    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.818531    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:24.822200    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:24.851316    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.851371    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:24.851391    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:24.851391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:24.940030    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:24.941511    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:24.941511    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:24.967127    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:24.967127    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:25.018271    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:25.018358    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:25.077769    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:25.077769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:27.621222    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:27.644179    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:27.675033    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.675033    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:27.678724    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:27.707945    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.707945    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:27.712443    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:27.740635    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.740635    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:27.744539    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:27.775332    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.775332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:27.779621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:27.807884    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.807884    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:27.812207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:27.843877    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.843877    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:27.850126    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:27.878365    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.878365    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:27.883323    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:27.911733    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.911733    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:27.911733    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:27.911733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:27.975085    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:27.975085    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:28.011926    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:28.011926    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:28.117959    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:28.117959    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:28.117959    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:28.146135    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:28.146135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:30.702904    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:30.732783    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:30.768726    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.768726    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:30.772432    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:30.804888    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.804888    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:30.809005    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:30.839403    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.839403    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:30.843668    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:30.874013    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.874013    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:30.878013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:30.906934    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.906934    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:30.911178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:30.936942    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.936942    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:30.940954    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:30.967843    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.967843    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:30.973798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:31.000614    8452 logs.go:282] 0 containers: []
	W1216 06:24:31.000614    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:31.000614    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:31.000614    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:31.063545    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:31.063545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:31.101704    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:31.101704    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:31.201356    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:31.201356    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:31.201356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:31.229634    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:31.229634    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:33.780745    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:33.805148    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:33.836319    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.836319    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:33.840094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:33.872138    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.872167    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:33.875487    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:33.908318    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.908318    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:33.912197    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:33.940179    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.940223    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:33.944152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:33.974912    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.974912    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:33.978728    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:34.004557    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.004557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:34.008971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:34.037591    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.037591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:34.041537    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:34.073153    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.073153    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:34.073153    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:34.073153    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:34.139585    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:34.139585    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:34.177888    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:34.177888    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:34.273589    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:34.273589    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:34.273589    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:34.298805    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:34.298805    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:36.851957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:36.889887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:36.919682    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.919682    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:36.923468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:36.953008    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.953073    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:36.957253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:36.985770    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.985770    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:36.989059    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:37.015702    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.015702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:37.019508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:37.046311    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.046351    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:37.050327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:37.087936    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.087936    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:37.092175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:37.121271    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.121271    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:37.125767    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:37.153753    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.153814    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:37.153814    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:37.153869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:37.218058    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:37.218058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:37.256162    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:37.257161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:37.349292    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:37.349292    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:37.349292    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:37.378861    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:37.379384    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:39.931797    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:39.956069    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:39.991154    8452 logs.go:282] 0 containers: []
	W1216 06:24:39.991154    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:39.994809    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:40.021488    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.021488    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:40.025604    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:40.055464    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.055464    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:40.059576    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:40.085410    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.086402    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:40.090048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:40.120389    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.120389    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:40.125766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:40.159925    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.159962    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:40.163912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:40.190820    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.190820    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:40.194350    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:40.223821    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.223886    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:40.223886    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:40.223886    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:40.292033    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:40.292033    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:40.331274    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:40.331274    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:40.423708    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:40.423708    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:40.423708    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:40.452101    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:40.452101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.005925    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:43.029165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:43.060601    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.060601    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:43.064304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:43.092446    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.092446    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:43.096552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:43.127295    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.127347    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:43.130913    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:43.159919    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.159986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:43.163049    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:43.190310    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.190384    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:43.194093    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:43.223641    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.223641    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:43.227270    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:43.254592    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.254592    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:43.259912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:43.293166    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.293166    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:43.293166    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:43.293166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:43.328685    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:43.328685    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:43.412970    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:43.413012    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:43.413042    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:43.444573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:43.444573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.501857    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:43.501857    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.068154    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:46.095291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:46.125740    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.125740    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:46.131016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:46.160926    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.160926    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:46.164909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:46.192634    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.192634    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:46.196346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:46.224203    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.224203    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:46.228650    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:46.255541    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.255541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:46.259732    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:46.289377    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.289377    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:46.293566    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:46.321342    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.321342    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:46.325492    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:46.352311    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.352342    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:46.352342    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:46.352382    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.416761    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:46.416761    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:46.469641    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:46.469641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:46.580672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:46.581191    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:46.581229    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:46.608166    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:46.608166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:49.162834    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:49.187402    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:49.219893    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.219893    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:49.223424    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:49.252338    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.252338    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:49.255900    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:49.286106    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.286131    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:49.289776    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:49.317141    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.317141    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:49.322761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:49.353605    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.353605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:49.357674    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:49.385747    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.385793    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:49.388757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:49.417812    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.417812    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:49.421500    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:49.452746    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.452797    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:49.452797    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:49.452797    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:49.516432    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:49.516432    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:49.553647    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:49.553647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:49.647049    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:49.647087    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:49.647087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:49.671889    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:49.671889    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:52.224199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:52.248067    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:52.282412    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.282412    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:52.286308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:52.315955    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.315955    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:52.319894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:52.353188    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.353188    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:52.356528    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:52.387579    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.387579    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:52.392336    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:52.421909    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.421909    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:52.425890    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:52.458902    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.458902    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:52.462430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:52.498067    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.498140    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:52.501354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:52.528125    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.528125    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:52.528125    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:52.528125    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:52.593845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:52.593845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:52.632779    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:52.632779    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:52.732902    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:52.721650   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.722944   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.723751   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.725908   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.727170   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:52.721650   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.722944   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.723751   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.725908   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.727170   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:52.732902    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:52.732902    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:52.762437    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:52.762437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.328400    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:55.355014    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:55.387364    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.387364    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:55.391085    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:55.417341    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.417341    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:55.421141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:55.450785    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.450785    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:55.454454    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:55.482484    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.482484    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:55.486100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:55.513682    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.513682    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:55.517291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:55.548548    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.548548    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:55.552971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:55.583380    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.583380    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:55.587471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:55.618619    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.618619    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:55.618619    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:55.618686    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:55.646962    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:55.646962    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.695480    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:55.695480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:55.757470    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:55.757470    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:55.796071    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:55.796071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:55.889833    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:55.877628   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.878745   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.880269   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.881391   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.882702   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:55.877628   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.878745   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.880269   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.881391   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.882702   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:58.396122    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:58.423573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:58.454757    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.454757    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:58.460430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:58.490597    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.490597    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:58.493832    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:58.523149    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.523149    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:58.526960    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:58.558649    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.558649    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:58.562228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:58.591400    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.591400    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:58.595569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:58.624162    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.624162    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:58.628070    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:58.660578    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.660578    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:58.664236    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:58.693155    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.693155    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:58.693155    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:58.693155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:58.732408    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:58.733409    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:58.823465    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:58.812767   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.814019   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.815130   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.816828   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.818278   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:58.823465    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:58.823465    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:58.848772    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:58.848772    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:58.900567    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:58.900567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
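	Each retry cycle above has the same shape: minikube first looks for a live apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), then probes for each expected control-plane container by name, and only after all probes come back empty does it gather the kubelet, dmesg, describe-nodes, Docker, and container-status logs. The probes are plain docker commands run inside the node, so they can be reproduced by hand; a minimal sketch, with the component list taken from the log above and minikube ssh assumed as the way into the node:

	    # Probe for the k8s_<component> containers minikube expects (sketch only)
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-<no container>}"
	    done

	In this run every probe returned an empty ID list, which is what produces the paired "0 containers" / "No container was found matching" lines.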
	I1216 06:25:01.465828    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:01.490385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:01.520316    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.520316    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:01.524299    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:01.555350    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.555350    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:01.559239    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:01.587077    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.587077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:01.591421    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:01.623853    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.623853    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:01.627746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:01.658165    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.658165    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:01.661588    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:01.703310    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.703310    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:01.709361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:01.740903    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.740903    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:01.744287    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:01.773431    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.773431    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:01.773431    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:01.773431    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:01.863541    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:01.853956   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.855113   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.856000   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.858627   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.859841   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
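	Note which kubectl is doing the describe-nodes check: not the host binary, but the version-matched one minikube installs inside the node at /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl, pointed at the node-local kubeconfig. The same invocation can be issued by hand while debugging; a sketch, assuming the default profile:

	    # Run the in-node kubectl against the node-local kubeconfig (sketch)
	    minikube ssh -- "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"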
	I1216 06:25:01.863541    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:01.863541    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:01.891816    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:01.891816    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:01.936351    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:01.936351    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:01.997563    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:01.997563    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.541470    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:04.565886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:04.595881    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.595881    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:04.599716    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:04.629724    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.629749    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:04.633814    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:04.666020    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.666047    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:04.669510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:04.699730    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.699730    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:04.704016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:04.734540    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.734540    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:04.738414    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:04.765651    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.765651    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:04.769397    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:04.797315    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.797315    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:04.801409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:04.832845    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.832845    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:04.832845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:04.832845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.869617    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:04.869617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:04.958334    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:04.947769   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.948641   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.950127   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.953617   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.954566   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:04.958334    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:04.958334    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:04.983497    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:04.983497    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:05.037861    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:05.037887    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.603239    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:07.626775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:07.655146    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.655146    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:07.658648    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:07.688192    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.688227    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:07.691749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:07.723836    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.723836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:07.727536    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:07.761238    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.761238    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:07.764987    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:07.792890    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.792890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:07.796847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:07.824734    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.824734    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:07.828821    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:07.859399    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.859399    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:07.862780    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:07.893406    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.893406    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:07.893457    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:07.893480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.954656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:07.954656    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:07.992200    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:07.993203    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:08.077979    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:08.068614   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.069601   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.072821   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.074198   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.075251   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:08.077979    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:08.077979    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:08.102718    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:08.102718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
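	The container-status collector is worth a second look. In sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, the command substitution resolves the crictl path when the binary is installed and substitutes the bare word crictl otherwise, so the line stays syntactically valid and simply fails, at which point the trailing || falls back to docker ps -a. Written out explicitly, the intent is roughly this (a sketch, not minikube's actual code; note the one-liner also falls back to docker when crictl exists but errors, which the if/else below does not):

	    # Prefer crictl when present, otherwise fall back to the docker CLI
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi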
	I1216 06:25:10.662101    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:10.688889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:10.721934    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.721996    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:10.727012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:10.760697    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.760746    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:10.763961    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:10.791222    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.791293    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:10.795121    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:10.826239    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.826317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:10.829753    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:10.857355    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.857355    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:10.861145    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:10.903922    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.903922    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:10.907990    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:10.937216    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.937281    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:10.940707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:10.969086    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.969086    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:10.969086    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:10.969238    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:11.062109    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:11.051521   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.052462   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.056878   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.058033   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.059089   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:11.062109    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:11.062109    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:11.090185    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:11.090185    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:11.141444    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:11.141444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:11.199181    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:11.199181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:13.741347    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:13.766441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:13.800424    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.800424    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:13.805169    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:13.835040    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.835040    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:13.839295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:13.864861    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.866077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:13.869598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:13.898887    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.898887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:13.903167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:13.931208    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.931208    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:13.936649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:13.963722    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.963722    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:13.967474    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:13.998640    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.998640    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:14.002572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:14.031349    8452 logs.go:282] 0 containers: []
	W1216 06:25:14.031401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:14.031401    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:14.031401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:14.124587    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:14.114187   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.115232   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.117492   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.120421   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.121924   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:14.124587    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:14.124714    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:14.153583    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:14.153583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:14.202636    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:14.202636    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:14.260591    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:14.260591    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:16.808603    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:16.833787    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:16.864300    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.864300    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:16.868592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:16.897549    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.897549    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:16.900917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:16.931516    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.931557    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:16.936698    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:16.965053    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.965053    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:16.969015    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:16.997017    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.997017    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:17.000551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:17.028733    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.028733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:17.032830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:17.062242    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.062242    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:17.066193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:17.096111    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.096186    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:17.096186    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:17.096243    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:17.126801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:17.126801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:17.178392    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:17.178392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:17.239223    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:17.239223    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:17.276363    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:17.277364    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:17.362910    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:17.350082   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.351537   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.353217   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356242   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356652   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:19.869062    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:19.894371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:19.924915    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.924915    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:19.929351    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:19.956535    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.956535    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:19.960534    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:19.989334    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.989334    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:19.993202    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:20.021108    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.021108    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:20.025230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:20.054251    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.054251    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:20.057788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:20.088787    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.088860    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:20.092250    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:20.120577    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.120577    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:20.123857    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:20.153015    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.153015    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:20.153015    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:20.153015    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:20.241391    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:20.241391    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:20.241391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:20.267492    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:20.267554    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:20.321240    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:20.321880    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:20.384978    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:20.384978    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:22.926087    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:22.949774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:22.982854    8452 logs.go:282] 0 containers: []
	W1216 06:25:22.982854    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:22.986923    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:23.017638    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.017638    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:23.021130    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:23.052442    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.052667    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:23.058175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:23.085210    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.085210    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:23.089664    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:23.120747    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.120795    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:23.124581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:23.150600    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.150600    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:23.154602    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:23.182147    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.182147    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:23.185649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:23.217087    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.217087    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:23.217087    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:23.217087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:23.280619    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:23.280619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:23.318090    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:23.318090    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:23.406270    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:23.406270    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:23.406270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:23.435128    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:23.435128    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:25.989934    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:26.012706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:26.043141    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.043141    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:26.047435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:26.075985    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.075985    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:26.079830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:26.110575    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.110575    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:26.113774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:26.144668    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.144668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:26.148428    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:26.175392    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.175392    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:26.179120    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:26.211067    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.211067    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:26.215072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:26.243555    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.243586    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:26.246934    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:26.279876    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.279876    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:26.279876    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:26.279876    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:26.387447    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:26.387488    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:26.387537    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:26.413896    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:26.413896    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:26.462318    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:26.462318    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:26.527832    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:26.527832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.072565    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:29.096390    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:29.127989    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.127989    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:29.131385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:29.158741    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.158741    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:29.162538    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:29.190346    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.190346    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:29.193798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:29.222234    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.222234    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:29.225740    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:29.252553    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.252553    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:29.256489    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:29.285679    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.285733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:29.289742    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:29.320841    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.321050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:29.324841    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:29.352461    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.352587    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:29.352615    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:29.352615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:29.419045    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:29.419045    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.457659    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:29.457659    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:29.544155    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:29.544155    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:29.544155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:29.571612    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:29.571646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:32.139910    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:32.164438    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:32.196526    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.196526    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:32.200231    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:32.226279    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.226279    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:32.230146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:32.257831    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.257831    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:32.262665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:32.293641    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.293641    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:32.297746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:32.327055    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.327055    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:32.331274    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:32.362206    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.362206    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:32.365146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:32.394600    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.394600    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:32.400058    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:32.428075    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.428075    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:32.428075    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:32.428075    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:32.491661    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:32.491661    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:32.528847    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:32.528847    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:32.616464    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:32.616464    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:32.616464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:32.642397    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:32.642397    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:35.191852    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:35.225285    8452 out.go:203] 
	W1216 06:25:35.227244    8452 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 06:25:35.227244    8452 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 06:25:35.227244    8452 out.go:285] * Related issues:
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 06:25:35.230096    8452 out.go:203] 
	
	
	==> Docker <==
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162855054Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162940064Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162949966Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162955666Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162961567Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.163040877Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.163140989Z" level=info msg="Initializing buildkit"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.281453678Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293658962Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293830383Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293958199Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.294017906Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:19:30 newest-cni-256200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:19:31 newest-cni-256200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:19:31 newest-cni-256200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:39.359927   19808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:39.361231   19808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:39.362214   19808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:39.363944   19808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:39.365469   19808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633501] CPU: 10 PID: 466820 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f865800db20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f865800daf6.
	[  +0.000001] RSP: 002b:00007ffc8c624780 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000033] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.839091] CPU: 12 PID: 466960 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fa6af131b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fa6af131af6.
	[  +0.000001] RSP: 002b:00007ffe97387e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 06:22] tmpfs: Unknown parameter 'noswap'
	[  +9.428310] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:25:39 up  2:02,  0 user,  load average: 1.41, 3.32, 3.88
	Linux newest-cni-256200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:25:36 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:37 newest-cni-256200 kubelet[19641]: E1216 06:25:37.125404   19641 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:37 newest-cni-256200 kubelet[19654]: E1216 06:25:37.880262   19654 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:37 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:38 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 16 06:25:38 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:38 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:38 newest-cni-256200 kubelet[19682]: E1216 06:25:38.605557   19682 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:38 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:38 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:39 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 16 06:25:39 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:39 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:39 newest-cni-256200 kubelet[19798]: E1216 06:25:39.359826   19798 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:39 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:39 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200
E1216 06:25:41.609271   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (606.7905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-256200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (381.88s)
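
The kubelet journal above pinpoints why the apiserver never appears: every kubelet restart (counters 485 through 488) dies in config validation with "kubelet is configured to not run on a host using cgroup v1", so no static pods are ever created and minikube's wait loop eventually reports K8S_APISERVER_MISSING. A minimal check of the node's cgroup mode, assuming shell access via "minikube ssh -p newest-cni-256200" (a sketch for diagnosis, not part of the recorded run):

	# Prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on a cgroup v1 host.
	stat -fc %T /sys/fs/cgroup/

On WSL2 hosts such as this one, a commonly suggested (but unverified here) mitigation is to force the unified hierarchy by adding "kernelCommandLine = cgroup_no_v1=all" under the [wsl2] section of %UserProfile%\.wslconfig and then restarting WSL with "wsl --shutdown".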

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:22:19.006153   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:22:43.893759   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.079851   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.087049   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.098992   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.121331   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.164323   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.247338   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.408784   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:45.731185   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:46.374150   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:22:47.656162   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:50.217787   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:55.340044   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:23:20.283833   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:20.290627   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:20.301983   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:20.323723   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:20.365890   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:20.448323   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:20.610572   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:20.932488   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:21.574571   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:22.856464   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:25.418988   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:26.064011   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:23:30.541407   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:34.887333   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:23:40.784121   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:23:40.929404   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:24:01.266059   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:04.008526   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:07.026452   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:24:42.229318   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:24:52.004453   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:52.011275   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:52.022997   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:52.044854   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:52.087076   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:52.169149   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:52.331197   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:52.653751   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:53.296053   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:54.578244   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:55.727031   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:24:57.140370   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:25:02.262485   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:25:12.504628   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:25:27.082384   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:25:28.949908   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:25:32.986888   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:25:59.539431   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:26:02.144169   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:26:04.151846   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:26:13.954426   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:26:18.330078   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:26:20.021679   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:26:24.773725   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:27:00.984266   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:01.855481   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:27:16.457068   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:16.463977   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:16.476216   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:16.497795   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:16.540233   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:16.621935   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:16.783370   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:17.106074   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:17.748768   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:27:19.030665   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:21.592643   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:26.715338   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:27:35.877891   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:36.957373   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:27:41.421122   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:43.897909   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:27:45.084135   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:27:57.439986   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:28:01.099317   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:01.106100   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:01.118375   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:01.140758   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:01.182300   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:01.264188   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:01.426111   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:01.748671   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:02.390627   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:03.672465   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:06.234563   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:07.179930   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:28:11.357549   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:12.793527   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:28:20.287655   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:21.599643   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:22.907175   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:28:38.402292   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:28:42.082435   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:28:47.996442   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:29:04.012543   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:29:06.973340   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:29:23.044762   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:29:38.811808   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:29:52.008319   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:29:55.731156   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:30:00.324957   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:30:19.721996   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:30:39.038135   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:30:44.967779   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:30:57.068237   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:31:02.147953   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:31:06.751740   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
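The cert_rotation errors interleaved above are noise rather than the failure itself: the shared kubeconfig still points at client certificates under profiles that earlier tests already deleted (bridge-030800, kubenet-030800, flannel-030800, and others), so the client-cert reloader logs a miss on every attempt. The EOF warnings against https://127.0.0.1:55116 are the actual symptom for this test. A minimal cleanup sketch for one stale profile, assuming the leftover kubeconfig entries carry minikube's usual profile-based names:

    # assumes the context/cluster/user entries are named after the profile, as minikube creates them
    kubectl config delete-context bridge-030800
    kubectl config delete-cluster bridge-030800
    kubectl config unset users.bridge-030800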
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 2 (605.2813ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
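Taken together with the host probe below, the picture is a container that is still running while the apiserver inside it reports Stopped, which matches the EOF responses seen on every poll of https://127.0.0.1:55116 until the 9m0s deadline expired. A sketch of re-running the failed check by hand, using the profile name and label selector from this run:

    # confirm apiserver state, then query the dashboard pods the test waits for
    out/minikube-windows-amd64.exe status -p no-preload-686300
    kubectl --context no-preload-686300 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard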
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
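The host-level proxy variables are empty, but that only describes the test process itself; the docker info captured in the Last Start log below shows the Docker Desktop daemon configured with HTTPProxy and HTTPSProxy of http.docker.internal:3128, so container traffic may still be proxied even with a clean host environment. A quick check of the daemon-side settings, as a sketch:

    # daemon-level proxy configuration, distinct from the host's HTTP(S)_PROXY env vars
    docker info --format "HTTP={{.HTTPProxy}} HTTPS={{.HTTPSProxy}} NO={{.NoProxy}}"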
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-686300
helpers_test.go:244: (dbg) docker inspect no-preload-686300:

-- stdout --
	[
	    {
	        "Id": "f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc",
	        "Created": "2025-12-16T06:04:57.603416998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 408764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:15:50.357035984Z",
	            "FinishedAt": "2025-12-16T06:15:46.555763422Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hosts",
	        "LogPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc-json.log",
	        "Name": "/no-preload-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-686300",
	                "Source": "/var/lib/docker/volumes/no-preload-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-686300",
	                "name.minikube.sigs.k8s.io": "no-preload-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58679b470f3820ec221a43ce0cb2eeb96c16084feb347cd3733ff5e676214bcf",
	            "SandboxKey": "/var/run/docker/netns/58679b470f38",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55112"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55113"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c88c2c552749da4ec31002e488ccc5e8184c9ce7ba360033e35a2aa2c5aead9",
	                    "EndpointID": "43959eb122225f782ad58d938dd1f7bfc24c45960ef7507609ea418938e5d63c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-686300",
	                        "f98924d46fbc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
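The inspect output confirms the container is Running and that 8443/tcp is published at 127.0.0.1:55116, the same endpoint that kept returning EOF above, so the polls were reaching the right port mapping and the problem sits with the apiserver inside the container. Two quick ways to read that mapping directly, as a sketch:

    # mapping for the apiserver port only, then the full port table as JSON
    docker port no-preload-686300 8443/tcp
    docker inspect --format "{{json .NetworkSettings.Ports}}" no-preload-686300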
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 2 (601.0266ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25: (1.428644s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-030800 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status docker --all --full --no-pager          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat docker --no-pager                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/docker/daemon.json                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo docker system info                                       │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat cri-docker --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cri-dockerd --version                                    │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status containerd --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat containerd --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /lib/systemd/system/containerd.service               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/containerd/config.toml                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo containerd config dump                                   │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status crio --all --full --no-pager            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat crio --no-pager                            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo crio config                                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete  │ -p kubenet-030800                                                               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image   │ newest-cni-256200 image list --format=json                                      │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ pause   │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ unpause │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ delete  │ -p newest-cni-256200                                                            │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ delete  │ -p newest-cni-256200                                                            │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:21:31
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:21:31.068463    4424 out.go:360] Setting OutFile to fd 1300 ...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.112163    4424 out.go:374] Setting ErrFile to fd 1224...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.126168    4424 out.go:368] Setting JSON to false
	I1216 06:21:31.128157    4424 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7112,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:21:31.129155    4424 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:21:31.133155    4424 out.go:179] * [kubenet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:21:31.136368    4424 notify.go:221] Checking for updates...
	I1216 06:21:31.137751    4424 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:31.140914    4424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:21:31.143313    4424 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:21:31.145626    4424 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:21:31.147629    4424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:21:31.150478    4424 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151727    4424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:21:31.272417    4424 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:21:31.275875    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.534539    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.516919297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.537553    4424 out.go:179] * Using the docker driver based on user configuration
	I1216 06:21:31.541211    4424 start.go:309] selected driver: docker
	I1216 06:21:31.541254    4424 start.go:927] validating driver "docker" against <nil>
	I1216 06:21:31.541286    4424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:21:31.597589    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.842240    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.823958826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.842240    4424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:21:31.843240    4424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:31.846236    4424 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:21:31.848222    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:21:31.848222    4424 start.go:353] cluster config:
	{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:21:31.851222    4424 out.go:179] * Starting "kubenet-030800" primary control-plane node in "kubenet-030800" cluster
	I1216 06:21:31.860233    4424 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:21:31.863229    4424 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:21:31.866228    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:31.866228    4424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:21:31.866228    4424 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:21:31.866228    4424 cache.go:65] Caching tarball of preloaded images
	I1216 06:21:31.866228    4424 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:21:31.866228    4424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:21:31.866228    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:31.866228    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json: {Name:mkd9bbe5249f898d86f7b7f3961735d2ed71d636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:31.935458    4424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:21:31.935458    4424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:21:31.935988    4424 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:21:31.936042    4424 start.go:360] acquireMachinesLock for kubenet-030800: {Name:mka6ae821c9ad8ee62e1a8eef0f2acffe81ebe64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:21:31.936202    4424 start.go:364] duration metric: took 160.2µs to acquireMachinesLock for "kubenet-030800"
	I1216 06:21:31.936352    4424 start.go:93] Provisioning new machine with config: &{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:31.936477    4424 start.go:125] createHost starting for "" (driver="docker")
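At this point process 4424 has verified the preloaded tarball, written the profile's config.json under a retried file lock (Delay:500ms, Timeout:1m0s per the lock.go line above), taken the machines lock, and is starting createHost. A minimal sketch of that acquire-with-retry pattern, assuming flock(1) semantics rather than minikube's actual Go lock implementation:

    # Hedged sketch of the WriteFile lock acquisition logged above:
    # retry every 500ms (Delay) and give up after one minute (Timeout).
    acquire_lock() {
      local lockfile=$1 deadline=$((SECONDS + 60))
      exec 9>"$lockfile"        # keep fd 9 open so the lock outlives the loop
      until flock -n 9; do      # non-blocking attempt
        (( SECONDS >= deadline )) && return 1
        sleep 0.5
      done
    }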
	I1216 06:21:30.055522    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:30.078529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:30.109519    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.109519    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:30.112511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:30.144765    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.144765    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:30.149313    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:30.181852    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.181852    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:30.184838    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:30.217464    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.217504    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:30.221332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:30.249301    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.249301    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:30.252441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:30.278998    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.278998    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:30.281997    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:30.312414    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.312414    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:30.318242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:30.357304    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.357361    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:30.357422    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:30.357422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:30.746929    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:30.746929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:30.795665    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:30.795746    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:30.878691    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:30.878691    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:30.913679    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:30.913679    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:31.010602    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
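The block above is process 8452's diagnostic sweep: it probes for each expected control-plane container by the k8s_<component> naming convention, finds none, and the fallback "describe nodes" fails because nothing is serving on localhost:8443 yet. The probe reduces to a loop like this (a sketch of the commands shown in the log, not minikube's source):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && echo "no container found matching \"${c}\""
    done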
	I1216 06:21:33.516463    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:33.569431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:33.607164    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.607164    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:33.610177    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:33.645795    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.645795    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:33.649795    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:33.678786    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.678786    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:33.681789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:33.712090    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.712090    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:33.716091    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:33.749207    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.749207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:33.753072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:33.787281    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.787317    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:33.790849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:33.822451    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.822451    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:33.828957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:33.859827    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.859881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:33.859881    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:33.859929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:33.922584    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:33.922584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:34.010640    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:34.010640    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:34.010640    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:34.047937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:34.047937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:34.108035    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:34.108113    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:31.939854    4424 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:21:31.939854    4424 start.go:159] libmachine.API.Create for "kubenet-030800" (driver="docker")
	I1216 06:21:31.939854    4424 client.go:173] LocalClient.Create starting
	I1216 06:21:31.940866    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.946190    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:21:32.002258    4424 cli_runner.go:211] docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:21:32.006251    4424 network_create.go:284] running [docker network inspect kubenet-030800] to gather additional debugging logs...
	I1216 06:21:32.006251    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800
	W1216 06:21:32.057274    4424 cli_runner.go:211] docker network inspect kubenet-030800 returned with exit code 1
	I1216 06:21:32.057274    4424 network_create.go:287] error running [docker network inspect kubenet-030800]: docker network inspect kubenet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-030800 not found
	I1216 06:21:32.057274    4424 network_create.go:289] output of [docker network inspect kubenet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-030800 not found
	
	** /stderr **
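Here the exit status 1 plus "network kubenet-030800 not found" is the expected signal that the cluster network does not exist yet, so network_create moves on to subnet selection. As a plain shell condition (sketch only):

    # Treat a failed inspect as "network absent, create it".
    if ! docker network inspect kubenet-030800 >/dev/null 2>&1; then
      echo "kubenet-030800 absent; selecting a free subnet"
    fi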
	I1216 06:21:32.061267    4424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:21:32.137401    4424 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.168856    4424 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.184860    4424 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.200856    4424 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.216426    4424 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.232146    4424 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d96b0}
	I1216 06:21:32.232146    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 06:21:32.235443    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	W1216 06:21:32.288644    4424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800 returned with exit code 1
	W1216 06:21:32.288644    4424 network_create.go:149] failed to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:21:32.288644    4424 network_create.go:116] failed to create docker network kubenet-030800 192.168.94.0/24, will retry: subnet is taken
	I1216 06:21:32.308048    4424 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.321168    4424 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015f57d0}
	I1216 06:21:32.321265    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:21:32.325637    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	I1216 06:21:32.469323    4424 network_create.go:108] docker network kubenet-030800 192.168.103.0/24 created
	I1216 06:21:32.469323    4424 kic.go:121] calculated static IP "192.168.103.2" for the "kubenet-030800" container
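The subnet search above walks private /24 blocks with a stride of 9 in the third octet (.49, .58, .67, .76, .85, .94, .103), skips blocks already reserved by other profiles, and retries when the daemon rejects a block with "Pool overlaps with other one on this address space". A sketch of that retry loop (stride and network name taken from this run; error handling simplified):

    third=49
    while :; do
      subnet="192.168.${third}.0/24"; gw="192.168.${third}.1"
      if docker network create --driver=bridge \
           --subnet="$subnet" --gateway="$gw" kubenet-030800 2>/dev/null; then
        echo "created kubenet-030800 on $subnet"; break
      fi
      third=$((third + 9))   # overlap or reservation: try the next block
    done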
	I1216 06:21:32.483125    4424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:21:32.541557    4424 cli_runner.go:164] Run: docker volume create kubenet-030800 --label name.minikube.sigs.k8s.io=kubenet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:21:32.608360    4424 oci.go:103] Successfully created a docker volume kubenet-030800
	I1216 06:21:32.611360    4424 cli_runner.go:164] Run: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:21:34.117036    4424 cli_runner.go:217] Completed: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.5056549s)
	I1216 06:21:34.117036    4424 oci.go:107] Successfully prepared a docker volume kubenet-030800
	I1216 06:21:34.117036    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:34.117036    4424 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:21:34.121793    4424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
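The extraction step mounts the preload tarball read-only next to the cluster's named volume and untars it inside a throwaway kicbase container, so the images land in the volume before the node container ever starts. Stripped to its shape (image tag abbreviated; full tag and tarball path appear in the line above):

    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v kubenet-030800:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:<tag> \
      -I lz4 -xf /preloaded.tar -C /extractDir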
	I1216 06:21:37.760556    7800 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:21:37.760556    7800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:21:37.761189    7800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:21:37.761753    7800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:21:37.761881    7800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:21:37.761881    7800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:21:37.764442    7800 out.go:252]   - Generating certificates and keys ...
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:21:37.765188    7800 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:21:37.765955    7800 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:21:37.766018    7800 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:21:37.766124    7800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:21:37.766165    7800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:21:37.766271    7800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:21:37.766333    7800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:21:37.766397    7800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:21:37.766458    7800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:21:37.770151    7800 out.go:252]   - Booting up control plane ...
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:21:37.770817    7800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:21:37.770952    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:21:37.771091    7800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:21:37.771167    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:21:37.771225    7800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:21:37.771366    7800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004327208s
	I1216 06:21:37.771902    7800 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:21:37.772247    7800 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 06:21:37.772484    7800 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:21:37.772735    7800 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:21:37.773067    7800 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.101943404s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.591910767s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002177662s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:21:37.773799    7800 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:21:37.773799    7800 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:21:37.774455    7800 kubeadm.go:319] [mark-control-plane] Marking the node bridge-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:21:37.774523    7800 kubeadm.go:319] [bootstrap-token] Using token: lrkd8c.ky3vlqagn7chac73
	I1216 06:21:37.777890    7800 out.go:252]   - Configuring RBAC rules ...
	I1216 06:21:37.777890    7800 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:21:37.779666    7800 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:21:37.780278    7800 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:21:37.780278    7800 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:21:37.781243    7800 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--control-plane 
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
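The control-plane-check lines in the kubeadm output above poll three health endpoints until each answers; they are ordinary HTTPS probes and can be reproduced by hand (addresses from this run; -k because the endpoints serve self-signed certificates):

    curl -ks https://192.168.85.2:8443/livez     # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez       # kube-scheduler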
	I1216 06:21:37.782257    7800 cni.go:84] Creating CNI manager for "bridge"
	I1216 06:21:37.785969    7800 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1216 06:21:35.013402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:37.791788    7800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 06:21:37.806804    7800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
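The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist carries the bridge CNI configuration that the "Configuring bridge CNI" step announces. Its byte-for-byte content is not in the log; a representative bridge conflist has roughly this shape (subnet and plugin options are assumptions for illustration, not the actual file):

    cat <<'EOF' >/etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF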
	I1216 06:21:37.825807    7800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-030800 minikube.k8s.io/updated_at=2025_12_16T06_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=bridge-030800 minikube.k8s.io/primary=true
	I1216 06:21:37.839814    7800 ops.go:34] apiserver oom_adj: -16
	I1216 06:21:38.032186    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:38.534048    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.035804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.534294    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:36.686452    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:36.704466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:36.733394    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.733394    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:36.737348    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:36.773510    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.773510    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:36.776509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:36.805498    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.805498    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:36.809501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:36.845499    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.845499    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:36.849511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:36.879646    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.879646    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:36.884108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:36.915408    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.915408    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:36.920279    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:36.952754    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.952754    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:36.955734    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:36.987884    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.987884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:36.987884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:36.987884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:37.053646    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:37.053646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:37.093881    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:37.093881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:37.179527    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:37.179584    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:37.179628    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:37.206261    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:37.206297    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:39.778747    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:39.802598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:39.832179    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.832179    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:39.836704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:39.869121    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.869121    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:39.873774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:39.909668    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.909668    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:39.914691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:39.947830    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.947830    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:39.951594    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:40.034177    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:40.535099    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.034558    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.535126    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.034691    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.533593    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.035143    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.831113    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:44.554108    7800 kubeadm.go:1114] duration metric: took 6.7282073s to wait for elevateKubeSystemPrivileges
	I1216 06:21:44.554108    7800 kubeadm.go:403] duration metric: took 23.3439157s to StartCluster
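The burst of "kubectl get sa default" calls above is a roughly 500ms poll: minikube cannot bind cluster-admin to kube-system:default (elevateKubeSystemPrivileges) until the controller-manager has created that ServiceAccount, which took about 6.7s here. As a plain loop (binary path and kubeconfig as logged):

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl \
          --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do
      sleep 0.5
    done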
	I1216 06:21:44.554108    7800 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.554108    7800 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:44.555899    7800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.557179    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:21:44.557179    7800 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:44.557179    7800 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:21:44.557179    7800 addons.go:70] Setting storage-provisioner=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:239] Setting addon storage-provisioner=true in "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:70] Setting default-storageclass=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 host.go:66] Checking if "bridge-030800" exists ...
	I1216 06:21:44.557179    7800 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-030800"
	I1216 06:21:44.557179    7800 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.910438    7800 out.go:179] * Verifying Kubernetes components...
	I1216 06:21:39.982557    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.982557    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:39.986642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:40.018169    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.018169    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:40.021165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:40.051243    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.051243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:40.057090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:40.084414    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.084414    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:40.084414    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:40.084414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:40.144414    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:40.144414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:40.179632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:40.179632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:40.269800    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:40.270323    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:40.270323    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:40.298399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:40.298399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:42.860131    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:42.883733    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:42.914204    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.914204    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:42.918228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:42.950380    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.950459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:42.954335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:42.985562    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.985562    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:42.988741    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:43.016773    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.016773    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:43.020957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:43.059042    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.059042    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:43.062546    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:43.091529    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.091529    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:43.095547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:43.122188    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.122188    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:43.126264    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:43.154603    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.154667    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:43.154687    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:43.154709    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:43.217437    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:43.217437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:43.256550    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:43.256550    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:43.334672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:43.334672    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:43.334672    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:43.363324    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:43.363324    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
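The cycle above keeps repeating because every "describe nodes" attempt dials the apiserver on localhost:8443 and is refused: no kube-apiserver container exists yet (note the empty `docker ps` filter results below). A minimal Go sketch of that reachability probe, illustrative only and not minikube source, with the host/port taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 8443 is the in-container apiserver port that kubectl is being refused on above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // "connection refused" while no container runs
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }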
	I1216 06:21:44.625758    7800 addons.go:239] Setting addon default-storageclass=true in "bridge-030800"
	I1216 06:21:44.961765    7800 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:21:44.962159    7800 host.go:66] Checking if "bridge-030800" exists ...
	W1216 06:21:45.051007    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:45.413866    7800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:45.416342    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:45.428762    7800 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.428762    7800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:21:45.433231    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.481472    7800 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:45.481472    7800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:21:45.485567    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.487870    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.534738    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:21:45.540734    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.651776    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.743561    7800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:21:45.947134    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:48.661269    7800 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.1264885s)
	I1216 06:21:48.661269    7800 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
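For readers decoding the sed pipeline above: it edits the coredns ConfigMap in place, inserting a `log` directive before the `errors` line and the following `hosts` block before the `forward . /etc/resolv.conf` line, so that host.minikube.internal resolves to the host gateway. The block below is reconstructed directly from the sed expressions:

        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }

`fallthrough` lets queries for any other name continue to the remaining plugins, including the upstream `forward`.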
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.2776091s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.1858261s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.9822555s)
	I1216 06:21:48.933443    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:48.974829    7800 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:21:48.977844    7800 addons.go:530] duration metric: took 4.4206041s for enable addons: enabled=[storage-provisioner default-storageclass]
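The addon flow just completed is two steps per manifest: copy the YAML over SSH into /etc/kubernetes/addons, then apply it with the cluster's own kubectl binary. A simplified Go sketch of the apply step, assuming plain os/exec rather than minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Mirrors the logged command: KUBECONFIG is set explicitly so the
    	// invoking shell's environment does not matter.
    	cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }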
	I1216 06:21:48.994296    7800 node_ready.go:35] waiting up to 15m0s for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 node_ready.go:49] node "bridge-030800" is "Ready"
	I1216 06:21:49.024312    7800 node_ready.go:38] duration metric: took 30.0163ms for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:21:49.030307    7800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.051593    7800 api_server.go:72] duration metric: took 4.4943521s to wait for apiserver process to appear ...
	I1216 06:21:49.051593    7800 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:21:49.051593    7800 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56268/healthz ...
	I1216 06:21:49.061499    7800 api_server.go:279] https://127.0.0.1:56268/healthz returned 200:
	ok
	I1216 06:21:49.063514    7800 api_server.go:141] control plane version: v1.34.2
	I1216 06:21:49.063514    7800 api_server.go:131] duration metric: took 11.9204ms to wait for apiserver health ...
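The healthz probe logged above is a plain HTTPS GET that counts as healthy once the endpoint returns 200 with body "ok". A hedged sketch of such a check (not api_server.go itself; the port is the one from this run, and certificate verification is skipped purely for illustration):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://127.0.0.1:56268/healthz")
    	if err != nil {
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }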
	I1216 06:21:49.064510    7800 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:21:49.088115    7800 system_pods.go:59] 8 kube-system pods found
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.088115    7800 system_pods.go:74] duration metric: took 23.6038ms to wait for pod list to return data ...
	I1216 06:21:49.088115    7800 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:21:49.094110    7800 default_sa.go:45] found service account: "default"
	I1216 06:21:49.094110    7800 default_sa.go:55] duration metric: took 5.9949ms for default service account to be created ...
	I1216 06:21:49.094110    7800 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:21:49.100097    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.100097    7800 retry.go:31] will retry after 202.33386ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.170358    7800 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-030800" context rescaled to 1 replicas
	I1216 06:21:49.310950    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.310950    7800 retry.go:31] will retry after 302.122926ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.630338    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630577    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.630663    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.630695    7800 retry.go:31] will retry after 447.973015ms: missing components: kube-dns, kube-proxy
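The `retry.go:31` lines show the polling pattern: list the kube-system pods, and if any required component is still missing, sleep a jittered and growing interval before trying again. An illustrative sketch of that loop, simplified and not the actual retry.go:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryUntil(deadline time.Duration, check func() error) error {
    	base := 200 * time.Millisecond
    	start := time.Now()
    	for time.Since(start) < deadline {
    		if err := check(); err == nil {
    			return nil
    		}
    		// jittered, roughly geometric growth -- compare the logged
    		// 202ms, 302ms, 447ms, ... intervals
    		sleep := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Println("will retry after", sleep)
    		time.Sleep(sleep)
    		base = base * 3 / 2
    	}
    	return errors.New("condition not met before deadline")
    }

    func main() {
    	_ = retryUntil(2*time.Second, func() error {
    		return errors.New("missing components: kube-dns") // stand-in for the pod check
    	})
    }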
	I1216 06:21:45.929428    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:45.950755    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:45.982605    8452 logs.go:282] 0 containers: []
	W1216 06:21:45.982605    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:45.987711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:46.020649    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.020649    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:46.024931    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:46.058836    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.058836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:46.066651    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:46.094860    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.094860    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:46.098689    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:46.127246    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.127246    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:46.130937    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:46.159519    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.159519    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:46.163609    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:46.195483    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.195483    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:46.199178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:46.229349    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.229349    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:46.229349    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:46.229349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:46.292392    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:46.292392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:46.330903    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:46.330903    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:46.427283    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:46.427283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:46.427283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:46.458647    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:46.458647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:49.005293    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.027308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:49.061499    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.061499    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:49.064510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:49.094110    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.094110    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:49.097105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:49.126749    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.126749    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:49.132748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:49.169344    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.169344    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:49.174332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:49.209103    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.209103    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:49.214581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:49.248644    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.248644    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:49.251643    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:49.281632    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.281632    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:49.285645    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:49.316955    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.316955    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:49.316955    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:49.317942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:49.391656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:49.391738    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:49.432724    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:49.432724    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:49.523989    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:49.523989    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:49.523989    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:49.552004    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:49.552004    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:48.467044    4424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.3450525s)
	I1216 06:21:48.467044    4424 kic.go:203] duration metric: took 14.349809s to extract preloaded images to volume ...
	I1216 06:21:48.470844    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:48.730876    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:48.710057733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
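The `docker system info --format "{{json .}}"` call run just above emits that whole blob as a single JSON object, which minikube decodes to learn the daemon's capabilities. A small consumer sketch (field names match the logged output; error handling kept minimal):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type dockerInfo struct {
    	ServerVersion   string `json:"ServerVersion"`
    	OperatingSystem string `json:"OperatingSystem"`
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		fmt.Println("docker info failed:", err)
    		return
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	fmt.Printf("%s on %s, %d CPUs, %d bytes RAM\n",
    		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }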
	I1216 06:21:48.733867    4424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:21:48.983392    4424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-030800 --name kubenet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-030800 --network kubenet-030800 --ip 192.168.103.2 --volume kubenet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:21:49.764686    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Running}}
	I1216 06:21:49.828590    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:49.890595    4424 cli_runner.go:164] Run: docker exec kubenet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:21:50.004225    4424 oci.go:144] the created container "kubenet-030800" has a running status.
	I1216 06:21:50.005228    4424 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.057161    4424 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:21:50.141101    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:50.207656    4424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:21:50.207656    4424 kic_runner.go:114] Args: [docker exec --privileged kubenet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:21:50.326664    4424 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
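Note the `--publish=127.0.0.1::22` style flags in the `docker run` above: the empty host-port slot makes Docker pick a free ephemeral port, which is why the code keeps re-running `docker container inspect -f "{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}"` to rediscover it. A sketch of that discovery step (illustrative; container name taken from this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort asks Docker which host port was bound for the given container port.
    func hostPort(container, containerPort string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	p, err := hostPort("kubenet-030800", "22/tcp")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh reachable at 127.0.0.1:" + p) // e.g. 56386 in the log above
    }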
	I1216 06:21:50.087090    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.087090    7800 retry.go:31] will retry after 426.637768ms: missing components: kube-dns, kube-proxy
	I1216 06:21:50.538640    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.538640    7800 retry.go:31] will retry after 479.139187ms: missing components: kube-dns
	I1216 06:21:51.025065    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.025065    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:51.025193    7800 retry.go:31] will retry after 758.159415ms: missing components: kube-dns
	I1216 06:21:51.791088    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Running
	I1216 06:21:51.791088    7800 system_pods.go:126] duration metric: took 2.6969413s to wait for k8s-apps to be running ...
	I1216 06:21:51.791088    7800 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:21:51.798336    7800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:21:51.818183    7800 system_svc.go:56] duration metric: took 27.0943ms WaitForService to wait for kubelet
	I1216 06:21:51.818183    7800 kubeadm.go:587] duration metric: took 7.2609035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:51.818183    7800 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:21:51.825244    7800 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:21:51.825244    7800 node_conditions.go:123] node cpu capacity is 16
	I1216 06:21:51.825244    7800 node_conditions.go:105] duration metric: took 7.0607ms to run NodePressure ...
	I1216 06:21:51.825244    7800 start.go:242] waiting for startup goroutines ...
	I1216 06:21:51.825244    7800 start.go:247] waiting for cluster config update ...
	I1216 06:21:51.825244    7800 start.go:256] writing updated cluster config ...
	I1216 06:21:51.833706    7800 ssh_runner.go:195] Run: rm -f paused
	I1216 06:21:51.841597    7800 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:21:51.851622    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:21:53.862268    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:52.109148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:52.140748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:52.186855    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.186855    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:52.193111    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:52.227511    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.227511    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:52.232508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:52.265331    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.265331    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:52.270635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:52.301130    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.301130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:52.307669    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:52.342623    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.342623    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:52.347794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:52.387246    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.387246    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:52.392629    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:52.445055    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.445143    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:52.449252    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:52.480071    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.480071    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:52.480071    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:52.480071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:52.547115    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:52.547115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:52.590630    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:52.590630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:52.690412    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:21:52.690412    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:52.690412    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:52.718573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:52.718573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:52.546527    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:52.603159    4424 machine.go:94] provisionDockerMachine start ...
	I1216 06:21:52.606161    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.662674    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.679442    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.679519    4424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:21:52.842464    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:52.842464    4424 ubuntu.go:182] provisioning hostname "kubenet-030800"
	I1216 06:21:52.846473    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.908771    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.908771    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.908771    4424 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-030800 && echo "kubenet-030800" | sudo tee /etc/hostname
	I1216 06:21:53.084692    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:53.088917    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.150284    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.150284    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.150284    4424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:21:53.322772    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:21:53.322772    4424 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:21:53.322772    4424 ubuntu.go:190] setting up certificates
	I1216 06:21:53.322772    4424 provision.go:84] configureAuth start
	I1216 06:21:53.326658    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:53.379472    4424 provision.go:143] copyHostCerts
	I1216 06:21:53.379472    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:21:53.379472    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:21:53.379472    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:21:53.381506    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:21:53.381506    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:21:53.382025    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:21:53.383238    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:21:53.383286    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:21:53.383622    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:21:53.384729    4424 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-030800 san=[127.0.0.1 192.168.103.2 kubenet-030800 localhost minikube]
	I1216 06:21:53.446404    4424 provision.go:177] copyRemoteCerts
	I1216 06:21:53.450578    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:21:53.453632    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.508049    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:53.625841    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:21:53.652177    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:21:53.678648    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:21:53.702593    4424 provision.go:87] duration metric: took 379.8156ms to configureAuth
	I1216 06:21:53.702593    4424 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:21:53.703116    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:53.706020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.763080    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.763659    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.763659    4424 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:21:53.941197    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:21:53.941229    4424 ubuntu.go:71] root file system type: overlay
	I1216 06:21:53.941395    4424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:21:53.945310    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.000318    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.000318    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.000318    4424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:21:54.194977    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:21:54.198986    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.262183    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.262873    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.262912    4424 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:21:55.764091    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:21:54.174803160 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
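Note: the one-liner above is minikube's idempotent unit update: `diff -u` exits non-zero when the rendered docker.service.new differs from the installed unit, and only then is the file swapped in and the daemon reloaded, enabled, and restarted. A minimal standalone sketch of the same pattern (generic paths; not minikube's exact code):

    # replace a unit file only when its rendered content changed
    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    if ! sudo diff -u "$cur" "$new" >/dev/null 2>&1; then
        sudo mv "$new" "$cur"
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    fi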
	
	I1216 06:21:55.764091    4424 machine.go:97] duration metric: took 3.1608879s to provisionDockerMachine
	I1216 06:21:55.764091    4424 client.go:176] duration metric: took 23.8239056s to LocalClient.Create
	I1216 06:21:55.764091    4424 start.go:167] duration metric: took 23.8239056s to libmachine.API.Create "kubenet-030800"
	I1216 06:21:55.764091    4424 start.go:293] postStartSetup for "kubenet-030800" (driver="docker")
	I1216 06:21:55.764091    4424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:21:55.769330    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:21:55.774020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:55.832721    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:55.960433    4424 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:21:55.968801    4424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:21:55.968801    4424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:21:55.969505    4424 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:21:55.973822    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:21:55.985938    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:21:56.011522    4424 start.go:296] duration metric: took 247.4281ms for postStartSetup
	I1216 06:21:56.016962    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.071317    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:56.078704    4424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:21:56.082131    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	W1216 06:21:55.087921    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:56.146380    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.278810    4424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:21:56.289463    4424 start.go:128] duration metric: took 24.3526481s to createHost
	I1216 06:21:56.289463    4424 start.go:83] releasing machines lock for "kubenet-030800", held for 24.352923s
	I1216 06:21:56.293770    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.349762    4424 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:21:56.354527    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.355718    4424 ssh_runner.go:195] Run: cat /version.json
	I1216 06:21:56.359207    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.419217    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.420010    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.548149    4424 ssh_runner.go:195] Run: systemctl --version
	W1216 06:21:56.549226    4424 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
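Note: exit status 127 is the shell's "command not found" code; the harness invoked the Windows binary name curl.exe inside the Linux guest, where no such command exists. A hedged probe that tolerates either name (hypothetical command, not from this log; container name taken from this run):

    # fall back from curl.exe to plain curl; report cleanly if the request cannot be made
    docker exec kubenet-030800 sh -c \
      'c=$(command -v curl.exe || command -v curl) && "$c" -sS -m 2 https://registry.k8s.io/ || echo "request failed or curl unavailable"'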
	I1216 06:21:56.567514    4424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:21:56.574755    4424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:21:56.580435    4424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:21:56.633416    4424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:21:56.633416    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:56.633416    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:56.633416    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:56.657618    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 06:21:56.658090    4424 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:21:56.658134    4424 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:21:56.678200    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:21:56.690681    4424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:21:56.695430    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:21:56.714310    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.735757    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:21:56.754647    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.771876    4424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:21:56.790078    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:21:56.810936    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:21:56.828529    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:21:56.859717    4424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:21:56.876724    4424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:21:56.891719    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.036224    4424 ssh_runner.go:195] Run: sudo systemctl restart containerd
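Note: the sed edits above pin containerd to SystemdCgroup = false so the runtime matches the "cgroupfs" driver detected on the host, and the kubelet config rendered later sets cgroupDriver: cgroupfs to agree; a cgroup-driver mismatch between runtime and kubelet is a classic cause of pods failing to start. A quick consistency check inside the guest (a sketch; `docker info --format '{{.CgroupDriver}}'` is the same probe minikube runs later in this log):

    # all three should agree on "cgroupfs"
    docker info --format '{{.CgroupDriver}}'
    grep SystemdCgroup /etc/containerd/config.toml
    grep cgroupDriver /var/lib/kubelet/config.yaml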
	I1216 06:21:57.185425    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:57.185522    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:57.190092    4424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:21:57.213249    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.239566    4424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:21:57.303231    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.326154    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:21:57.344861    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:57.372889    4424 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:21:57.386009    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:21:57.401220    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1216 06:21:57.422607    4424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:21:57.590920    4424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:21:57.727211    4424 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:21:57.727211    4424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:21:57.751771    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:21:57.772961    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.912458    4424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:21:58.834645    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:21:58.856232    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:21:58.880727    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:58.906712    4424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:21:59.052553    4424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:21:59.194941    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.333924    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:21:59.357147    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:21:59.379570    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.513788    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:21:59.631489    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:59.649336    4424 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:21:59.653752    4424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:21:59.660755    4424 start.go:564] Will wait 60s for crictl version
	I1216 06:21:59.665368    4424 ssh_runner.go:195] Run: which crictl
	I1216 06:21:59.677200    4424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:21:59.717428    4424 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:21:59.720622    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:21:59.765567    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
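Note: crictl is invoked here without a runtime flag because it reads its endpoint from the /etc/crictl.yaml written a moment earlier (runtime-endpoint: unix:///var/run/cri-dockerd.sock), which is how the version query lands on cri-dockerd rather than containerd. A sketch of the same check with the endpoint made explicit:

    # equivalent to `sudo crictl version` with /etc/crictl.yaml in place
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version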
	W1216 06:21:55.865199    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	W1216 06:21:58.365962    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:55.273773    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:55.297441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:55.334351    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.334404    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:55.338338    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:55.372344    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.372344    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:55.375335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:55.429711    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.429711    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:55.432707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:55.463415    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.463415    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:55.466882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:55.495871    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.495871    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:55.499782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:55.530135    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.530135    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:55.534032    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:55.561956    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.561956    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:55.567456    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:55.598684    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.598684    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:55.598684    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:55.598684    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:55.661553    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:55.661553    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:55.699330    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:55.699330    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:55.806271    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
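Note: the connection-refused errors above are expected at this stage of the retried start: kubectl inside the node targets the local apiserver on port 8443, and the container listings just before show no kube-apiserver container running, so nothing is listening there. A hedged way to confirm that directly in the guest (assumes `ss` is present in the base image):

    # expect "no listener on 8443" while the control plane is down
    sudo ss -ltn | grep -q ':8443 ' && echo "apiserver listening" || echo "no listener on 8443"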
	I1216 06:21:55.806271    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:55.806271    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:55.839937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:55.839937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.400590    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:58.422580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:58.453081    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.453081    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:58.457176    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:58.482739    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.482739    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:58.486288    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:58.516198    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.516198    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:58.520374    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:58.550134    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.550134    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:58.553679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:58.585815    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.585815    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:58.589532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:58.620180    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.620180    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:58.626021    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:58.656946    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.656946    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:58.659942    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:58.687618    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.687618    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:58.687618    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:58.687618    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:58.777493    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:58.777493    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:58.777493    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:58.805676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:58.805676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.860391    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:58.860391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:58.925444    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:58.925444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:59.807579    4424 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:21:59.810667    4424 cli_runner.go:164] Run: docker exec -t kubenet-030800 dig +short host.docker.internal
	I1216 06:21:59.962844    4424 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:21:59.967733    4424 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:21:59.974503    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
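Note: the hosts update above deliberately avoids sed -i: Docker bind-mounts /etc/hosts into the container, so the file must be rewritten in place (cp over it) rather than replaced by a new inode. The { grep -v; echo; } > tmp; cp tmp pattern is an upsert: drop any existing line for the name, then append the fresh mapping. A generic sketch of the same idiom (upsert_host is a hypothetical helper name):

    # upsert "IP<TAB>NAME" into /etc/hosts without swapping the inode
    upsert_host() {
        ip=$1 name=$2 tab=$(printf '\t')
        { grep -v "${tab}${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
        sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    upsert_host 192.168.65.254 host.minikube.internal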
	I1216 06:21:59.995371    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:00.053937    4424 kubeadm.go:884] updating cluster {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:22:00.053937    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:22:00.057874    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.094105    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.094105    4424 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:22:00.097332    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.129189    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.129225    4424 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:22:00.129280    4424 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:22:00.129486    4424 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:22:00.132350    4424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:22:00.208072    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:00.208072    4424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:22:00.208072    4424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-030800 NodeName:kubenet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:22:00.208072    4424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:22:00.213204    4424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:22:00.225061    4424 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:22:00.229012    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:22:00.242127    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
	I1216 06:22:00.258591    4424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:22:00.278876    4424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
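Note: the kubeadm.yaml rendered above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and is staged with a .new suffix so a later step can compare and swap it in, as with docker.service earlier. One hedged way to sanity-check such a file before use (the validate subcommand exists in recent kubeadm releases; availability depends on the binary version shipped here):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new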
	I1216 06:22:00.305788    4424 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:22:00.315868    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:22:00.339710    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:00.483171    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:00.505844    4424 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800 for IP: 192.168.103.2
	I1216 06:22:00.505844    4424 certs.go:195] generating shared ca certs ...
	I1216 06:22:00.505844    4424 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.506501    4424 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:22:00.507023    4424 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:22:00.507484    4424 certs.go:257] generating profile certs ...
	I1216 06:22:00.507484    4424 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key
	I1216 06:22:00.507484    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt with IP's: []
	I1216 06:22:00.552695    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt ...
	I1216 06:22:00.552695    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt: {Name:mk4783bd7e1619c0ea341eaca75005ddd88d5b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.553960    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key ...
	I1216 06:22:00.553960    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key: {Name:mk427571c1896a50b896e76c58a633b5512ad44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.555335    4424 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8
	I1216 06:22:00.555661    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:22:00.581299    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 ...
	I1216 06:22:00.581299    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8: {Name:mk9cb22362f0ba7f5c0b5c6877c5c2e8d72eb278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.582304    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 ...
	I1216 06:22:00.582304    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8: {Name:mk2a3e21d232de7f748cffa074c96be0850cc9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.583303    4424 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt
	I1216 06:22:00.599920    4424 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key
	I1216 06:22:00.600703    4424 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key
	I1216 06:22:00.601353    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt with IP's: []
	I1216 06:22:00.664564    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt ...
	I1216 06:22:00.664564    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt: {Name:mk02eb62f20a18ad60f930ae30a248a87b7cb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.665010    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key ...
	I1216 06:22:00.665010    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key: {Name:mk8a8b2a6c6b1b3e2e2cc574e01303d6680bf793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.680006    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:22:00.680554    4424 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:22:00.680554    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:22:00.681404    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:22:00.683052    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:22:00.710388    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:22:00.737370    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:22:00.766290    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:22:00.790943    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:22:00.815072    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:22:00.839330    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:22:00.863340    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:22:00.921806    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:22:00.945068    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:22:00.972351    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:22:00.998813    4424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:22:01.025404    4424 ssh_runner.go:195] Run: openssl version
	I1216 06:22:01.039534    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.056142    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:22:01.077227    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.085140    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.089133    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
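Note: `openssl x509 -hash -noout` prints the subject-name hash that OpenSSL uses to look certificates up inside /etc/ssl/certs; after symlinking minikubeCA.pem there by name, the hash is used to expose the same CA under <hash>.0 so TLS clients can resolve it. A sketch of that rehash step (mirrors what the surrounding commands are doing):

    # expose a CA in the OpenSSL trust dir under its subject hash
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"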
	W1216 06:22:03.871305    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 06:22:03.871305    2100 node_ready.go:38] duration metric: took 6m0.0002926s for node "no-preload-686300" to be "Ready" ...
	I1216 06:22:03.874339    2100 out.go:203] 
	W1216 06:22:03.877050    2100 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:22:03.877050    2100 out.go:285] * 
	W1216 06:22:03.879403    2100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:22:03.881203    2100 out.go:203] 
	W1216 06:22:00.861344    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:22:01.860562    7800 pod_ready.go:99] pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8s6v4" not found
	I1216 06:22:01.860562    7800 pod_ready.go:86] duration metric: took 10.0087717s for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:01.860562    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:03.875170    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:01.467725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:01.493737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:01.526377    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.526426    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:01.530527    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:01.563582    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.563582    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:01.567007    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:01.606460    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.606460    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:01.610155    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:01.647965    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.647965    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:01.652309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:01.691011    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.691011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:01.695466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:01.736509    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.736509    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:01.739991    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:01.773642    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.773642    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:01.777617    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:01.811025    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.811141    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:01.811141    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:01.811141    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:01.881533    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:01.881533    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.919632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:01.919632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:02.020491    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:02.020538    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:02.020587    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:02.059933    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:02.060031    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
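
The scan above is minikube's diagnostics loop: logs.go looks for each expected control-plane container via a k8s_<component> name filter and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, Docker, and container-status output. Below is a minimal Go sketch of that per-component lookup, assuming the Docker CLI is on PATH; listByName is an illustrative name, not minikube's actual helper.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listByName returns the IDs of containers whose names match the
    // k8s_<component> prefix, mirroring the `docker ps -a --filter` calls above.
    func listByName(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
    	for _, c := range components {
    		ids, err := listByName(c)
    		if err != nil {
    			fmt.Printf("listing %q failed: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			// matches the W-level "No container was found matching" lines
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }
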
	I1216 06:22:04.620893    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:04.645761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:04.680596    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.680596    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:04.684611    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:04.712607    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.712607    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:04.716737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:04.744218    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.744218    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:04.748501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:04.787600    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.787668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:04.791349    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:04.825500    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.825547    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:04.829098    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:04.878465    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.878465    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:04.881466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:04.910168    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.910168    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:04.914167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:04.949752    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.949810    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:04.949810    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:04.949873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.143585    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:22:01.161031    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:22:01.179456    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.197251    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:22:01.216028    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.226660    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.230697    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.278644    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:22:01.297647    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:22:01.317326    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.341360    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:22:01.367643    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.377139    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.383754    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.440843    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:22:01.457977    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
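
The `openssl x509 -hash` / `ln -fs` pairs above follow the standard OpenSSL CA-directory convention: each trusted PEM is linked as <subject-hash>.0 so verifiers can locate it by hash, which is why minikube recomputes the hash for every certificate it installs. A sketch of the same step in Go, shelling out to openssl for the hash; hashSymlink is an illustrative name.

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // hashSymlink links /etc/ssl/certs/<subject-hash>.0 to pemPath, the same
    // convention the `openssl x509 -hash` + `ln -fs` pairs in the log follow.
    func hashSymlink(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := hashSymlink("/usr/share/ca-certificates/117042.pem"); err != nil {
    		panic(err)
    	}
    }
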
	I1216 06:22:01.476683    4424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:22:01.483599    4424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
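
Note that certs.go:400 treats the failed stat of apiserver-kubelet-client.crt as "likely first start" rather than an error. A local-filesystem sketch of that distinction, assuming direct file access instead of minikube's stat-over-SSH; certExists is an illustrative name.

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    // certExists reports whether a cluster certificate is already on disk.
    // A missing file is the expected first-start case, not an error.
    func certExists(path string) (bool, error) {
    	_, err := os.Stat(path)
    	switch {
    	case err == nil:
    		return true, nil
    	case errors.Is(err, os.ErrNotExist):
    		return false, nil // likely first start
    	default:
    		return false, err
    	}
    }

    func main() {
    	ok, err := certExists("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	fmt.Println(ok, err)
    }
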
	I1216 06:22:01.484303    4424 kubeadm.go:401] StartCluster: {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:22:01.490132    4424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:22:01.529050    4424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:22:01.545461    4424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:22:01.559986    4424 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:22:01.564509    4424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:22:01.575681    4424 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:22:01.575681    4424 kubeadm.go:158] found existing configuration files:
	
	I1216 06:22:01.581349    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:22:01.593595    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:22:01.599386    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:22:01.618969    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:22:01.633516    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:22:01.638266    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:22:01.656598    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.670398    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:22:01.674972    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.695466    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:22:01.709055    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:22:01.713665    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
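
Each grep / rm -f pair above checks whether an existing /etc/kubernetes/*.conf already targets https://control-plane.minikube.internal:8443 and removes it otherwise (here the files simply do not exist yet), so that kubeadm init regenerates a consistent set. A condensed Go sketch, again assuming local file access rather than ssh_runner; cleanStaleConfigs is an illustrative name.

    package main

    import (
    	"errors"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cleanStaleConfigs removes kubeconfig files that do not reference the
    // expected control-plane endpoint, mirroring the grep + rm -f pairs above.
    func cleanStaleConfigs(endpoint string) error {
    	for _, f := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		p := filepath.Join("/etc/kubernetes", f)
    		b, err := os.ReadFile(p)
    		if err == nil && strings.Contains(string(b), endpoint) {
    			continue // already targets the right endpoint: keep it
    		}
    		if err := os.Remove(p); err != nil && !errors.Is(err, os.ErrNotExist) {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := cleanStaleConfigs("https://control-plane.minikube.internal:8443"); err != nil {
    		panic(err)
    	}
    }
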
	I1216 06:22:01.733357    4424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:22:01.884136    4424 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:22:01.891445    4424 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:22:01.994223    4424 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 06:22:06.379758    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:08.874715    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:04.987656    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:04.987703    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:05.093013    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:05.093013    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:05.093013    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:05.148503    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:05.148503    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:05.222357    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:05.222357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:07.791130    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:07.816699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:07.846890    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.846890    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:07.850551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:07.885179    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.885179    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:07.889622    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:07.920925    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.920925    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:07.925517    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:07.955043    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.955043    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:07.959825    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:07.988928    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.988928    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:07.993735    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:08.025335    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.025335    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:08.031801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:08.063231    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.063231    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:08.068525    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:08.106217    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.106217    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:08.106217    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:08.106217    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:08.173411    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:08.173411    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:08.241764    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:08.241764    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:08.282741    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:08.282741    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:08.376141    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:08.376181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:08.376246    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:10.875960    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:13.371029    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:13.873624    7800 pod_ready.go:94] pod "coredns-66bc5c9577-tcbrk" is "Ready"
	I1216 06:22:13.873624    7800 pod_ready.go:86] duration metric: took 12.0128951s for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.879094    7800 pod_ready.go:83] waiting for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.889705    7800 pod_ready.go:94] pod "etcd-bridge-030800" is "Ready"
	I1216 06:22:13.889705    7800 pod_ready.go:86] duration metric: took 10.6111ms for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.893578    7800 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.912416    7800 pod_ready.go:94] pod "kube-apiserver-bridge-030800" is "Ready"
	I1216 06:22:13.912416    7800 pod_ready.go:86] duration metric: took 18.8376ms for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.917120    7800 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.068093    7800 pod_ready.go:94] pod "kube-controller-manager-bridge-030800" is "Ready"
	I1216 06:22:14.068093    7800 pod_ready.go:86] duration metric: took 150.9707ms for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.266154    7800 pod_ready.go:83] waiting for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.666596    7800 pod_ready.go:94] pod "kube-proxy-pbfkb" is "Ready"
	I1216 06:22:14.666596    7800 pod_ready.go:86] duration metric: took 400.436ms for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:10.906574    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:10.929977    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:10.963006    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.963006    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:10.966334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:10.995517    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.995517    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:10.998887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:11.027737    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.027771    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:11.034529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:11.070221    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.070221    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:11.075447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:11.105575    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.105575    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:11.108569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:11.143549    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.143549    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:11.146562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:11.178034    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.178034    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:11.181411    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:11.211522    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.211522    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:11.211522    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:11.211522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:11.244289    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:11.244289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:11.295870    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:11.295870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:11.359418    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:11.360418    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:11.394416    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:11.394416    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:11.489247    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:13.994214    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:14.016691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:14.049641    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.049641    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:14.053607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:14.088893    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.088893    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:14.092847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:14.131857    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.131857    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:14.135845    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:14.168503    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.168503    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:14.172477    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:14.200948    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.200948    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:14.204642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:14.234975    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.234975    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:14.238802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:14.274052    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.274107    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:14.277642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:14.306199    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.306199    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:14.306199    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:14.306199    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:14.374972    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:14.374972    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:14.411356    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:14.411356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:14.498252    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:14.498283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:14.498283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:14.528112    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:14.528112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:14.872200    7800 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:94] pod "kube-scheduler-bridge-030800" is "Ready"
	I1216 06:22:15.267078    7800 pod_ready.go:86] duration metric: took 394.8723ms for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:40] duration metric: took 23.4251556s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:15.362849    7800 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:15.367720    7800 out.go:179] * Done! kubectl is now configured to use "bridge-030800" cluster and "default" namespace by default
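
The pod_ready.go waits interleaved above poll each labelled kube-system pod until its PodReady condition reports True or the pod is gone. A minimal client-go sketch of one such wait, assuming a configured clientset; WaitPodReadyOrGone is an illustrative name, not minikube's actual function.

    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitPodReadyOrGone blocks until the pod's Ready condition is True or the
    // pod no longer exists, polling every 2s -- the behaviour pod_ready.go logs.
    func WaitPodReadyOrGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return true, nil // pod is gone, which also ends the wait
    			}
    			if err != nil {
    				return false, nil // transient API error: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
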
	I1216 06:22:17.092050    4424 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:22:17.093065    4424 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:22:17.093065    4424 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:22:17.096059    4424 out.go:252]   - Generating certificates and keys ...
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:22:17.099055    4424 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:22:17.099055    4424 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:22:17.102055    4424 out.go:252]   - Booting up control plane ...
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:22:17.104058    4424 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.507351804s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.957344338s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.90080548s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002224001s
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:22:17.106067    4424 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:22:17.107057    4424 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:22:17.107057    4424 kubeadm.go:319] [bootstrap-token] Using token: rs8etp.b2dh1vgtia9jcvb4
	I1216 06:22:17.081041    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:17.103056    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:17.137059    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.137059    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:17.141064    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:17.172640    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.172640    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:17.176638    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:17.210910    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.210910    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:17.215347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:17.248986    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.248986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:17.252989    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:17.287415    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.287415    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:17.293572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:17.324098    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.324098    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:17.330062    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:17.366512    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.366512    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:17.370101    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:17.402400    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.402400    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:17.402400    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:17.402400    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:17.455027    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:17.455027    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:17.513029    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:17.513029    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:17.548022    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:17.548022    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:17.645629    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:17.645629    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:17.645629    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:17.110053    4424 out.go:252]   - Configuring RBAC rules ...
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:22:17.111060    4424 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.111060    4424 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:22:17.113053    4424 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:22:17.113053    4424 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:22:17.113053    4424 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--control-plane 
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
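
The --discovery-token-ca-cert-hash that kubeadm prints above is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. The sketch below recomputes it from the certificateDir shown earlier in this log; caCertHash is an illustrative name.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash reproduces kubeadm's discovery hash: sha256 over the CA
    // certificate's DER-encoded SubjectPublicKeyInfo.
    func caCertHash(path string) (string, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(h)
    }
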
	I1216 06:22:17.114052    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:17.114052    4424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-030800 minikube.k8s.io/updated_at=2025_12_16T06_22_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kubenet-030800 minikube.k8s.io/primary=true
	I1216 06:22:17.134054    4424 ops.go:34] apiserver oom_adj: -16
	I1216 06:22:17.253989    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.753536    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.254825    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.755186    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.255440    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.754492    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.256463    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.753254    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.253896    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.753097    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.858877    4424 kubeadm.go:1114] duration metric: took 4.7437541s to wait for elevateKubeSystemPrivileges
	I1216 06:22:21.858877    4424 kubeadm.go:403] duration metric: took 20.3742909s to StartCluster
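
The repeated `kubectl get sa default` calls above are minikube polling at roughly 500ms intervals until the controller-manager has created the default service account, which is what the elevateKubeSystemPrivileges duration measures. A minimal stand-alone sketch of the same wait, assuming a reachable kubeconfig:

    # poll until the default service account exists
    until kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
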
	I1216 06:22:21.858877    4424 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.858877    4424 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:22:21.861003    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.861972    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:22:21.861972    4424 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:22:21.861972    4424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:22:21.861972    4424 addons.go:70] Setting storage-provisioner=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:239] Setting addon storage-provisioner=true in "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:70] Setting default-storageclass=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:22:21.861972    4424 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-030800"
	I1216 06:22:21.861972    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.864167    4424 out.go:179] * Verifying Kubernetes components...
	I1216 06:22:21.875224    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:21.939068    4424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:22:21.939740    4424 addons.go:239] Setting addon default-storageclass=true in "kubenet-030800"
	I1216 06:22:21.939740    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.942493    4424 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:21.942493    4424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:22:21.947611    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:21.951961    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:22.001257    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.003241    4424 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.003241    4424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:22:22.006248    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:22.070295    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.425928    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:22:22.444230    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:22.451290    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.540661    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:24.151685    4424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7257338s)
	I1216 06:22:24.151837    4424 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
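
The long sed pipeline above patches the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.65.254 in this run) ahead of the existing forward directive, and a log directive ahead of errors. Assuming the stock Corefile layout, the result can be confirmed with:

    # inspect the patched Corefile; before "forward" it should now contain:
    #   hosts {
    #      192.168.65.254 host.minikube.internal
    #      fallthrough
    #   }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
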
	I1216 06:22:24.529871    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.0785053s)
	I1216 06:22:24.529983    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.0856125s)
	I1216 06:22:24.530029    4424 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9893406s)
	I1216 06:22:24.535621    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:24.547997    4424 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
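
Both addons are applied with the cluster's own kubectl binary against the local admin kubeconfig, after the manifests were scp'd into /etc/kubernetes/addons. A manual equivalent from the host would look roughly like the following (a sketch; minikube drives this over SSH as logged above):

    minikube ssh -p kubenet-030800 -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.2/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml
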
	I1216 06:22:20.178315    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:20.202308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:20.231344    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.231344    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:20.236317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:20.279459    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.279459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:20.283465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:20.322463    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.322463    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:20.327465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:20.366466    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.366466    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:20.371478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:20.409468    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.409468    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:20.413471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:20.447432    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.447432    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:20.451099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:20.486103    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.486103    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:20.490094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:20.530098    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.530098    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:20.530098    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:20.530098    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:20.557089    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:20.557089    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:20.606234    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:20.607239    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:20.667498    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:20.667498    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:20.703674    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:20.703674    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:20.796605    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
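
Note the interleaving: lines tagged with PID 8452 belong to a different minikube process whose apiserver is down, so each `kubectl describe nodes` attempt fails with connection refused on localhost:8443 while it cycles through `docker ps` filters looking for control-plane containers. That failure mode can be probed directly from the affected node (a sketch, not part of the logged run):

    # a down apiserver refuses TCP on 8443; curl exit code 7 = connection refused,
    # matching the "dial tcp [::1]:8443" errors above
    curl -sk --max-time 5 https://localhost:8443/healthz; echo "exit=$?"
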
	I1216 06:22:23.300916    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:23.324266    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:23.355598    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.355598    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:23.359141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:23.390554    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.390644    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:23.394340    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:23.423019    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.423019    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:23.426772    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:23.456953    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.457021    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:23.460762    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:23.491477    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.491477    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:23.495183    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:23.527107    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.527107    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:23.531577    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:23.559306    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.559306    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:23.563381    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:23.592615    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.592615    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:23.592615    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:23.592615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:23.630103    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:23.630103    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:23.719384    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:23.719514    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:23.719546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:23.746097    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:23.746097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:23.807727    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:23.807727    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:24.550004    4424 addons.go:530] duration metric: took 2.6879945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:22:24.591996    4424 node_ready.go:35] waiting up to 15m0s for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 node_ready.go:49] node "kubenet-030800" is "Ready"
	I1216 06:22:24.646202    4424 node_ready.go:38] duration metric: took 54.2051ms for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:22:24.652200    4424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:24.721472    4424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-030800" context rescaled to 1 replicas
	I1216 06:22:24.735392    4424 api_server.go:72] duration metric: took 2.87338s to wait for apiserver process to appear ...
	I1216 06:22:24.735392    4424 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:22:24.735392    4424 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56385/healthz ...
	I1216 06:22:24.821241    4424 api_server.go:279] https://127.0.0.1:56385/healthz returned 200:
	ok
	I1216 06:22:24.825583    4424 api_server.go:141] control plane version: v1.34.2
	I1216 06:22:24.825583    4424 api_server.go:131] duration metric: took 90.1899ms to wait for apiserver health ...
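
Apiserver readiness here is decided by its /healthz endpoint, reached through the Docker-published host port (56385 in this run, mapped to the container's 8443). The same probe can be made by hand, bearing in mind the port is specific to this run:

    # /healthz returns the plain text "ok" once the apiserver is serving
    curl -sk https://127.0.0.1:56385/healthz
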
	I1216 06:22:24.825583    4424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:22:24.832936    4424 system_pods.go:59] 8 kube-system pods found
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.833022    4424 system_pods.go:61] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.833131    4424 system_pods.go:74] duration metric: took 7.4392ms to wait for pod list to return data ...
	I1216 06:22:24.833131    4424 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:22:24.838156    4424 default_sa.go:45] found service account: "default"
	I1216 06:22:24.838156    4424 default_sa.go:55] duration metric: took 5.0253ms for default service account to be created ...
	I1216 06:22:24.838156    4424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:22:24.844228    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.844228    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.844228    4424 retry.go:31] will retry after 236.325715ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.105694    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.105749    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.105770    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.105848    4424 retry.go:31] will retry after 372.640753ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.532382    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.532482    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.532587    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.532611    4424 retry.go:31] will retry after 313.138834ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.853141    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.853661    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.853715    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.853777    4424 retry.go:31] will retry after 472.942865ms: missing components: kube-dns, kube-proxy
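
The system_pods retries above use short jittered backoffs (236ms, 372ms, 313ms, 472ms) while kube-dns and kube-proxy remain Pending. An equivalent one-shot readiness check with kubectl, assuming the same kubeconfig, would be something like:

    # list any kube-system pod whose phase is not yet Running
    kubectl -n kube-system get pods \
      -o jsonpath='{range .items[*]}{.metadata.name}={.status.phase}{"\n"}{end}' \
      | grep -v '=Running$' && echo "not ready yet" || echo "all running"
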
	I1216 06:22:26.382913    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:26.404112    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:26.436722    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.436722    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:26.440749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:26.470877    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.470877    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:26.474941    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:26.503887    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.503950    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:26.508216    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:26.538317    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.538317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:26.542754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:26.571126    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.571189    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:26.574883    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:26.604762    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.604762    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:26.608705    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:26.637404    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.637444    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:26.641214    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:26.669720    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.669720    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:26.669720    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:26.669720    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:26.707289    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:26.707289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:26.791357    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:26.791357    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:26.791357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:26.817227    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:26.817227    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.865832    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:26.865832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.436231    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:29.459817    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:29.493134    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.493186    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:29.497118    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:29.526722    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.526722    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:29.531481    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:29.561672    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.561718    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:29.566882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:29.595896    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.595947    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:29.599655    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:29.628575    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.628661    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:29.632644    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:29.660164    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.660164    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:29.663829    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:29.694413    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.694413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:29.698152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:29.725286    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.725286    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:29.725355    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:29.725355    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.787721    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:29.787721    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:29.828376    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:29.828376    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:29.916249    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:29.916249    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:29.916249    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:29.942276    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:29.942276    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.336069    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Running
	I1216 06:22:26.336069    4424 system_pods.go:126] duration metric: took 1.4978916s to wait for k8s-apps to be running ...
	I1216 06:22:26.336069    4424 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:22:26.342244    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:22:26.368294    4424 system_svc.go:56] duration metric: took 32.1861ms WaitForService to wait for kubelet
	I1216 06:22:26.368345    4424 kubeadm.go:587] duration metric: took 4.5062595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:22:26.368345    4424 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:22:26.376647    4424 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:22:26.376691    4424 node_conditions.go:123] node cpu capacity is 16
	I1216 06:22:26.376745    4424 node_conditions.go:105] duration metric: took 8.3456ms to run NodePressure ...
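
For the NodePressure step, minikube reads the node's reported capacity (here ~1TiB ephemeral storage and 16 CPUs) and its pressure conditions. The same figures are visible on the node object; a sketch:

    # capacity plus DiskPressure/MemoryPressure/etc. conditions as the apiserver reports them
    kubectl get node kubenet-030800 \
      -o jsonpath='{.status.capacity}{"\n"}{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
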
	I1216 06:22:26.376745    4424 start.go:242] waiting for startup goroutines ...
	I1216 06:22:26.376745    4424 start.go:247] waiting for cluster config update ...
	I1216 06:22:26.376795    4424 start.go:256] writing updated cluster config ...
	I1216 06:22:26.382913    4424 ssh_runner.go:195] Run: rm -f paused
	I1216 06:22:26.391122    4424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:26.399112    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:28.410987    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:30.912607    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
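
The final pod_ready stage waits up to 4m for every pod carrying one of the listed control-plane/app labels to become Ready; the warnings above show coredns-66bc5c9577-8qrgg still not Ready on the early polls. A kubectl equivalent of that wait, sketched with one of the label selectors:

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=4m
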
	I1216 06:22:32.497361    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:32.517362    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:32.549841    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.549912    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:32.553592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:32.582070    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.582070    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:32.585068    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:32.612095    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.612095    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:32.615889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:32.644953    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.644953    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:32.649025    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:32.676348    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.676429    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:32.680134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:32.708040    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.708040    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:32.712034    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:32.745789    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.745789    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:32.752533    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:32.781449    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.781504    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:32.781504    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:32.781504    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:32.843135    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:32.843135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:32.881564    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:32.881564    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:32.982597    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:32.982597    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:32.982597    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:33.013212    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:33.013212    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:22:33.410898    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:35.912070    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	I1216 06:22:35.578218    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:35.601163    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:35.629786    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.629786    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:35.634440    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:35.663168    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.663168    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:35.667699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:35.699050    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.699050    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:35.703224    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:35.736149    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.736149    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:35.741542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:35.772450    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.772450    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:35.776692    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:35.804150    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.804150    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:35.808799    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:35.837871    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.837871    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:35.841100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:35.870769    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.870769    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:35.870769    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:35.870769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:35.934803    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:35.934803    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
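The dmesg invocation is tuned for non-interactive capture: -P disables the pager, -H keeps human-readable timestamps, -L=never suppresses color codes, and --level restricts output to warning severity and above before tail caps it at 400 lines. To replay the same capture interactively on the node:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400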
	I1216 06:22:35.973201    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:35.973201    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:36.070057    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:36.070057    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:36.070057    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:36.098690    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:36.098690    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
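The container-status command relies on a shell fallback chain: the backtick substitution expands to the crictl path when the binary is installed, and to the bare name otherwise, so a missing (or failing) crictl falls through to plain docker ps -a. The same idiom in modern $(...) form:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a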
	I1216 06:22:38.663786    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:38.688639    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:38.718646    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.718646    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:38.721640    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:38.751651    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.751651    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:38.754647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:38.784327    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.784327    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:38.788327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:38.815337    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.815337    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:38.818328    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:38.846331    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.846331    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:38.849339    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:38.880297    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.880297    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:38.884227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:38.917702    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.917702    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:38.920940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:38.964973    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.964973    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
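Each gathering cycle first checks for a live apiserver process with pgrep (-x exact pattern match, -n newest match, -f match against the full command line) and then probes for each expected control-plane container by its k8s_ name prefix. A rough shell equivalent of that probe loop, assuming the docker runtime used in this run, is:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # empty output for every name reproduces the "0 containers" lines in the log
      docker ps -a --filter=name=k8s_"$c" --format '{{.ID}}'
    done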
	I1216 06:22:38.964973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:38.964973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:38.999971    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:38.999971    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:39.102927    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:39.102927    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:39.102927    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:39.141934    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:39.141934    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:39.210081    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:39.210081    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:36.404625    4424 pod_ready.go:99] pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8qrgg" not found
	I1216 06:22:36.404625    4424 pod_ready.go:86] duration metric: took 10.0053735s for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:36.404625    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:38.415310    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:40.417680    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:41.775031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:41.798710    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:41.831778    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.831778    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:41.835461    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:41.866411    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.866411    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:41.871544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:41.902486    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.902486    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:41.905907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:41.932887    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.932887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:41.935886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:41.965890    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.965890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:41.968887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:42.000893    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.000893    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:42.004876    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:42.043522    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.043591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:42.049149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:42.081678    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.081678    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:42.081678    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:42.081678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:42.140208    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:42.140208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:42.198197    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:42.198197    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:42.241586    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:42.241586    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:42.350617    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:42.350617    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:42.350617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:44.884303    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:44.902304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:44.933421    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.933421    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:44.938149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:44.974292    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.974334    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:44.977512    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1216 06:22:42.418518    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:44.914304    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:45.010620    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.010620    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:45.013618    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:45.047628    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.047628    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:45.050627    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:45.089756    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.089850    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:45.096356    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:45.137323    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.137323    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:45.141322    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:45.169330    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.170335    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:45.173321    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:45.202336    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.202336    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:45.202336    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:45.202336    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:45.227331    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:45.227331    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:45.275577    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:45.275630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:45.335206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:45.335206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:45.372222    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:45.372222    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:45.471935    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:47.976320    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:48.004505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:48.037430    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.037430    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:48.040437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:48.076428    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.076477    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:48.081194    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:48.118536    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.118536    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:48.124810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:48.153702    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.153702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:48.159558    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:48.187736    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.187736    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:48.192607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:48.225619    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.225619    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:48.229580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:48.260085    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.260085    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:48.263087    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:48.294313    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.294376    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:48.294376    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:48.294425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:48.345094    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:48.345094    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:48.423576    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:48.423576    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:48.459577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:48.459577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:48.548441    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:48.548441    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:48.548441    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:47.414818    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:49.417236    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:51.080561    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:51.104134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:51.132144    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.132144    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:51.136151    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:51.163962    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.163962    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:51.169361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:51.198404    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.198404    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:51.201253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:51.229899    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.229899    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:51.232895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:51.261881    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.261881    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:51.264887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:51.295306    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.295306    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:51.298763    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:51.331779    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.331850    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:51.337211    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:51.367502    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.367502    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:51.367502    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:51.367502    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:51.424226    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:51.424226    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:51.482475    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:51.482475    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:51.527426    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:51.527426    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:51.618444    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:51.618444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:51.618444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.148108    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:54.167190    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:54.198456    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.198456    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:54.202605    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:54.236901    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.236901    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:54.240906    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:54.272541    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.272541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:54.277008    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:54.312764    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.312764    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:54.317359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:54.347564    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.347564    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:54.350557    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:54.377557    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.377557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:54.381564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:54.411585    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.411585    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:54.415565    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:54.447567    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.447567    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:54.447567    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:54.447567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:54.483559    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:54.483559    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:54.589583    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:54.589583    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:54.589583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.617283    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:54.617349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:54.673906    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:54.673990    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 06:22:51.420194    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:53.916809    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:55.919718    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:58.419688    4424 pod_ready.go:94] pod "coredns-66bc5c9577-w7zmc" is "Ready"
	I1216 06:22:58.419688    4424 pod_ready.go:86] duration metric: took 22.0147573s for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.424677    4424 pod_ready.go:83] waiting for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.432677    4424 pod_ready.go:94] pod "etcd-kubenet-030800" is "Ready"
	I1216 06:22:58.432677    4424 pod_ready.go:86] duration metric: took 7.9992ms for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.435689    4424 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.459477    4424 pod_ready.go:94] pod "kube-apiserver-kubenet-030800" is "Ready"
	I1216 06:22:58.459477    4424 pod_ready.go:86] duration metric: took 22.793ms for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.463834    4424 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.611617    4424 pod_ready.go:94] pod "kube-controller-manager-kubenet-030800" is "Ready"
	I1216 06:22:58.611617    4424 pod_ready.go:86] duration metric: took 147.7381ms for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.811398    4424 pod_ready.go:83] waiting for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.211755    4424 pod_ready.go:94] pod "kube-proxy-5b9l9" is "Ready"
	I1216 06:22:59.211755    4424 pod_ready.go:86] duration metric: took 400.3513ms for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.412761    4424 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811735    4424 pod_ready.go:94] pod "kube-scheduler-kubenet-030800" is "Ready"
	I1216 06:22:59.811813    4424 pod_ready.go:86] duration metric: took 399.0464ms for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811850    4424 pod_ready.go:40] duration metric: took 33.4202632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:59.926671    4424 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:59.930035    4424 out.go:179] * Done! kubectl is now configured to use "kubenet-030800" cluster and "default" namespace by default
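In contrast to the stalled cluster above, the parallel kubenet-030800 start succeeds: its pod_ready loop polls each kube-system pod until it reports Ready, or until the pod has been deleted (as happened with the first coredns replica, which is treated as terminal success). An approximate hand-rolled equivalent of that wait, assuming minikube created a kubectl context named kubenet-030800, is:

    kubectl --context kubenet-030800 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s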
	I1216 06:22:57.250472    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:57.271468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:57.303800    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.303800    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:57.306801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:57.338803    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.338803    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:57.341800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:57.369018    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.369018    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:57.372806    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:57.403510    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.403510    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:57.406808    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:57.440995    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.440995    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:57.444225    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:57.475612    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.475612    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:57.479607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:57.509842    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.509842    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:57.513186    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:57.545981    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.545981    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:57.545981    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:57.545981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:57.636635    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:57.636635    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:57.636635    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:57.662639    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:57.662639    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:57.720464    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:57.720464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:57.782460    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:57.782460    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.324364    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:00.344368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:00.375358    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.375358    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:00.378355    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:00.410368    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.410368    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:00.414359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:00.442364    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.442364    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:00.446359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:00.476371    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.476371    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:00.479359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:00.508323    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.508323    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:00.512431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:00.550611    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.550611    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:00.553606    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:00.586336    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.586336    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:00.590552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:00.624129    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.624129    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:00.624129    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:00.624129    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:00.685547    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:00.685547    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.737417    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:00.737417    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:00.858025    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:00.858025    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:00.858025    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:00.886607    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:00.886607    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
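
Each retry cycle in this log follows the same shape: minikube probes for every expected control-plane container by name, finds none, gathers kubelet, dmesg, describe-nodes, Docker, and container-status logs, and the `kubectl describe nodes` step fails because nothing is listening on localhost:8443. A minimal sketch of the container probe, assuming a shell inside the Docker-driver node (an illustration of the commands shown above, not minikube's source):

	# Probe each expected component the way the log does, via name-filtered `docker ps`.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter=name=k8s_"${c}" --format '{{.ID}}')
	  [ -z "$ids" ] && echo "No container was found matching \"${c}\""
	done
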
	I1216 06:23:03.463847    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:03.826614    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:03.881622    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.881622    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:03.887610    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:03.936557    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.937539    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:03.941562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:03.979542    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.979542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:03.983550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:04.020535    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.020535    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:04.025547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:04.064541    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.064541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:04.068548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:04.101538    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.101538    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:04.104544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:04.141752    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.141752    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:04.146757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:04.182755    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.182755    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:04.182755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:04.182755    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:04.305758    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:04.305758    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:04.356425    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:04.356425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:04.487429    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:04.487429    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:04.487429    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:04.526318    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:04.526362    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.087022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:07.110346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:07.137790    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.137790    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:07.141786    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:07.174601    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.174601    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:07.179419    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:07.211656    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.211656    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:07.216897    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:07.250459    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.250459    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:07.254048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:07.282207    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.282207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:07.285851    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:07.313925    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.313925    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:07.317529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:07.348851    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.348851    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:07.353083    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:07.381401    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.381401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:07.381401    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:07.381401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:07.408641    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:07.408641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.450935    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:07.450935    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:07.512733    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:07.512733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:07.552522    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:07.552522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:07.649624    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.155054    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:10.178201    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:10.207068    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.207068    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:10.210473    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:10.239652    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.239652    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:10.242766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:10.274887    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.274887    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:10.278519    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:10.308294    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.308351    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:10.312209    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:10.342572    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.342572    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:10.346437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:10.375569    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.375630    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:10.378861    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:10.405446    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.405446    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:10.410730    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:10.441244    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.441244    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:10.441244    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:10.441244    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:10.502753    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:10.502753    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:10.540437    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:10.540437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:10.626853    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.626853    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:10.626853    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:10.654987    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:10.655058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.213336    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:13.237358    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:13.266636    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.266721    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:13.270023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:13.297369    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.297434    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:13.300782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:13.336039    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.336039    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:13.341919    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:13.370523    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.370523    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:13.374455    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:13.404606    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.404606    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:13.408542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:13.437373    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.437431    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:13.441106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:13.470738    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.470738    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:13.474495    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:13.502203    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.502262    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:13.502262    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:13.502293    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.552578    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:13.552578    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:13.617499    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:13.617499    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:13.660047    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:13.660047    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:13.747316    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:13.747316    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:13.747316    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.284216    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:16.307907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:16.344535    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.344535    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:16.347847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:16.379001    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.379021    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:16.382292    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:16.413093    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.413116    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:16.418012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:16.456763    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.456826    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:16.460621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:16.491671    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.491693    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:16.495352    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:16.527862    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.527862    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:16.534704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:16.564194    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.564243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:16.570369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:16.601444    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.601444    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:16.601444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:16.601444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.631785    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:16.631785    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:16.675190    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:16.675190    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:16.737700    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:16.737700    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:16.775092    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:16.775092    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:16.865026    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
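
The cycles repeat roughly every three seconds, each gated on the same `pgrep` check for a running apiserver process. A sketch of the implied wait loop (the cadence is inferred from the timestamps; this is not minikube's actual control flow):

	# Poll until a kube-apiserver process appears inside the minikube node.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # matches the ~3 s spacing between cycles above
	  # on each miss, another round of kubelet/dmesg/describe-nodes/Docker logs is gathered
	done
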
	I1216 06:23:19.370669    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:19.393524    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:19.423405    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.423513    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:19.427307    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:19.459137    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.459238    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:19.462635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:19.493542    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.493542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:19.497334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:19.526496    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.526496    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:19.529949    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:19.559120    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.559120    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:19.562460    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:19.591305    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.591305    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:19.595794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:19.625200    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.626193    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:19.629187    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:19.657201    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.657201    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:19.657270    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:19.657270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:19.722496    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:19.722496    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:19.761161    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:19.761161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:19.852755    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:19.853756    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:19.853756    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:19.880330    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:19.881280    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.458668    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:22.483505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:22.514647    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.514647    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:22.518193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:22.551494    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.551494    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:22.555268    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:22.586119    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.586119    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:22.590107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:22.621733    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.621733    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:22.624739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:22.651728    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.651728    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:22.655725    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:22.687826    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.687826    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:22.692217    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:22.727413    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.727413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:22.731318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:22.769477    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.769477    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:22.770462    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:22.770462    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:22.795455    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:22.795455    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.851473    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:22.851473    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:22.911454    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:22.912459    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:22.948112    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:22.948112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:23.039238    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:25.544174    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:25.571784    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:25.610368    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.610422    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:25.615377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:25.651080    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.651129    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:25.655234    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:25.695942    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.695942    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:25.700548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:25.727743    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.727743    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:25.730739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:25.765620    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.765650    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:25.769261    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:25.805072    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.805127    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:25.810318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:25.840307    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.840307    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:25.844490    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:25.888279    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.888279    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:25.888279    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:25.888279    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:25.964206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:25.964206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:26.003275    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:26.003275    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:26.111485    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:26.111485    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:26.111485    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:26.146819    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:26.146819    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:28.694382    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:28.716947    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:28.753062    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.753062    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:28.756810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:28.789692    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.789692    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:28.794681    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:28.823690    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.823690    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:28.827683    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:28.858686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.858686    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:28.861688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:28.891686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.891686    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:28.894684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:28.923683    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.923683    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:28.926684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:28.958314    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.958314    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:28.962325    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:28.991317    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.991317    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:28.991317    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:28.991317    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:29.039348    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:29.039348    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:29.103117    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:29.103117    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:29.148003    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:29.148003    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:29.240448    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:29.240448    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:29.240448    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
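
All of the memcache.go errors above reduce to one symptom: TCP connects to [::1]:8443 are refused, so kubectl can never fetch the server's API group list. A hypothetical one-line check from inside the node that reproduces the same failure (not taken from the log):

	# Expect an immediate connection-refused while the apiserver is down.
	curl -sk 'https://localhost:8443/api?timeout=32s' || echo 'connection refused: apiserver not listening on 8443'
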
	I1216 06:23:31.772923    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:31.796203    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:31.827485    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.827485    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:31.830572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:31.873718    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.873718    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:31.877445    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:31.926391    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.926391    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:31.929391    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:31.964572    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.964572    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:31.968096    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:32.003776    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.003776    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:32.007175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:32.046322    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.046322    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:32.049283    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:32.077299    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.077299    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:32.080289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:32.114717    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.114793    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
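The eight docker ps probes above look for kubelet-managed containers, which are named k8s_<component>_...; every probe returns an empty ID list here. Condensed into one loop with the same filter and format string (a sketch, not the collector's actual code):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done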
	I1216 06:23:32.114793    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:32.114843    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:32.191987    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:32.191987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
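These two gather steps pull the last 400 kubelet journal entries and the tail of warning-and-above kernel messages. Copied verbatim from the logged commands, they can be rerun inside the node as:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400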
	I1216 06:23:32.237143    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:32.237143    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:32.331899    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
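Each retry of the "describe nodes" step re-runs the bundled kubectl against the node-local kubeconfig; since that kubeconfig points at localhost:8443 and no apiserver is up, every attempt fails identically. The command, verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig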
	I1216 06:23:32.331899    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:32.331899    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:32.362021    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:32.362021    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
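The container-status step prefers a CRI-aware listing and falls back to the Docker CLI: the backticked subshell yields the crictl path when installed, or the bare word crictl, which then fails and triggers the || branch. Restructured for readability, the logged command is equivalent to:

    # Prefer crictl when present; otherwise list containers via docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a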
	I1216 06:23:34.918825    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:34.945647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:34.976745    8452 logs.go:282] 0 containers: []
	W1216 06:23:34.976745    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:34.980636    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:35.012295    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.012295    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:35.015295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:35.047289    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.047289    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:35.050289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:35.081492    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.081492    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:35.085580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:35.121645    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.121645    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:35.126840    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:35.167976    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.167976    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:35.170966    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:35.201969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.201969    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:35.204969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:35.232969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.233980    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:35.233980    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:35.233980    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:35.292973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:35.292973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:35.327973    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:35.327973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:35.420114    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:35.420114    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:35.420114    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:35.451148    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:35.451148    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:38.010056    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:38.035506    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:38.071853    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.071853    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:38.075564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:38.106543    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.106543    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:38.109547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:38.143669    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.143669    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:38.152737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:38.191923    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.191923    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:38.195575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:38.225935    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.225935    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:38.228939    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:38.268550    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.268550    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:38.271759    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:38.304387    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.304421    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:38.307849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:38.341968    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.341968    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:38.341968    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:38.341968    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:38.404267    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:38.404267    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:38.443104    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:38.443104    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:38.551474    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:38.551474    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:38.551474    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:38.582843    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:38.582869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.141896    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:41.185331    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:41.218961    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.219548    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:41.223789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:41.252376    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.252376    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:41.255368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:41.285378    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.285378    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:41.288369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:41.318383    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.318383    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:41.321372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:41.349373    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.349373    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:41.353377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:41.390105    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.390105    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:41.393103    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:41.425109    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.425109    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:41.428107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:41.462594    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.462594    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:41.462594    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:41.462594    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:41.492096    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:41.492156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.553755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:41.553806    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:41.622329    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:41.622329    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:41.664016    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:41.664016    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:41.759009    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:44.265223    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:44.286309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:44.319583    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.319583    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:44.324575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:44.358046    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.358114    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:44.361895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:44.390541    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.390541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:44.395354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:44.433163    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.433163    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:44.436754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:44.470605    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.470605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:44.475856    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:44.504412    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.504484    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:44.508013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:44.540170    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.540170    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:44.545802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:44.574593    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.575118    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:44.575181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:44.575181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:44.609181    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:44.609231    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:44.663988    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:44.663988    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:44.737678    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:44.737678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:44.777530    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:44.777530    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:44.868751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:47.373432    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:47.674375    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:47.705067    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.705067    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:47.709193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:47.739921    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.739921    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:47.743656    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:47.771970    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.771970    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:47.776451    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:47.808633    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.808633    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:47.813124    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:47.856079    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.856079    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:47.859452    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:47.891897    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.891897    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:47.895769    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:47.926050    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.926050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:47.929679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:47.962571    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.962571    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:47.962571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:47.962571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:48.026367    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:48.026367    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:48.063580    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:48.063580    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:48.173751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:48.173792    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:48.173792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:48.199403    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:48.199403    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:50.750699    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:50.774573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:50.804983    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.804983    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:50.808894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:50.838533    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.838533    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:50.842242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:50.873377    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.873377    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:50.877508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:50.907646    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.907646    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:50.912410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:50.943853    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.943853    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:50.950275    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:50.977570    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.977570    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:50.982575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:51.010211    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.010211    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:51.014545    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:51.048584    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.048584    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:51.048584    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:51.048584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:51.112725    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:51.112725    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:51.150854    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:51.150854    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:51.246494    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:51.246535    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:51.246535    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:51.274873    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:51.274873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:53.832981    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:53.857995    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:53.892159    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.892159    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:53.895775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:53.926160    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.926160    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:53.929408    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:53.956482    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.956552    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:53.959711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:53.989788    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.989788    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:53.993230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:54.022506    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.022506    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:54.025409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:54.054974    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.054974    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:54.059372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:54.088015    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.088015    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:54.092123    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:54.121961    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.121961    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:54.121961    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:54.121961    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:54.169232    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:54.169295    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:54.230158    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:54.231156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:54.267713    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:54.267713    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:54.368006    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:54.368006    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:54.368006    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:56.899723    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:56.923149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:56.957635    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.957635    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:56.961499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:56.988363    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.988363    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:56.992371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:57.021993    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.021993    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:57.025544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:57.055718    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.055718    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:57.060969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:57.092456    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.092523    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:57.096418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:57.125588    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.125588    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:57.129665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:57.160663    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.160663    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:57.164518    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:57.196231    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.196281    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:57.196281    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:57.196281    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:57.258973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:57.258973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:57.302939    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:57.302939    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:57.397577    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:57.397577    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:57.397577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:57.434801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:57.434801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:59.991022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:00.014170    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:00.046529    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.046529    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:00.050903    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:00.080796    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.080796    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:00.084418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:00.114858    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.114858    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:00.121404    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:00.152596    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.152596    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:00.156447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:00.183532    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.183648    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:00.187074    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:00.218971    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.218971    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:00.222929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:00.252086    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.252086    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:00.256309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:00.285884    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.285884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:00.285884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:00.285884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:00.364208    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:00.364208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:00.403464    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:00.403464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:00.495864    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:00.495864    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:00.495864    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:00.521592    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:00.521592    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
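The sweep above is minikube's control-plane probe: for each expected component it lists containers whose name matches k8s_<component> and warns when none are found, which is why every probe in this run reports "0 containers". A minimal local sketch of that loop in Go, assuming the Docker CLI is on PATH (minikube itself issues these commands over SSH inside the node via ssh_runner, so this is an illustration rather than minikube's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component list the log walks through, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c,
			"--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("probe %s failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}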
	I1216 06:24:03.070724    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:03.093858    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:03.127112    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.127112    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:03.131265    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:03.161262    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.161262    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:03.165073    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:03.195882    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.195933    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:03.200488    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:03.230205    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.230205    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:03.234193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:03.263580    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.263629    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:03.267410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:03.297599    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.297652    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:03.300957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:03.329666    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.329720    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:03.333378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:03.365184    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.365236    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:03.365282    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:03.365282    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:03.428385    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:03.428385    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:03.465984    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:03.465984    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:03.557873    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:03.559101    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:03.559101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:03.586791    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:03.586791    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
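Every describe-nodes attempt fails with "dial tcp [::1]:8443: connect: connection refused": the TCP connection itself is rejected, meaning nothing is listening on the apiserver port, which is consistent with the empty k8s_kube-apiserver probes above. A quick reachability check that reproduces the same failure mode (a hypothetical helper, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl talks to https://localhost:8443; a plain TCP dial is enough
	// to distinguish "refused" (no listener) from a TLS or auth problem.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}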
	I1216 06:24:06.142562    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:06.170227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:06.202672    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.202672    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:06.206691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:06.237624    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.237624    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:06.241559    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:06.267616    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.267616    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:06.271709    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:06.304567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.304567    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:06.308556    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:06.337567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.337567    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:06.344744    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:06.373520    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.373520    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:06.377184    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:06.411936    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.411936    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:06.415789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:06.447263    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.447263    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:06.447263    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:06.447263    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:06.509097    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:06.509097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:06.546188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:06.546188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:06.639923    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:06.639923    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:06.639923    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:06.666485    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:06.666519    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
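The container-status command is a shell fallback chain: "which crictl || echo crictl" prefers crictl when it is on PATH, and the trailing "|| sudo docker ps -a" falls back to the Docker CLI if the crictl invocation fails. The same try-then-fall-back shape in Go (illustrative only; the test runs the bash one-liner shown above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when present, mirroring `which crictl || echo crictl`.
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
	}
	// Mirrors the `|| sudo docker ps -a` fallback in the logged command.
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}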
	I1216 06:24:09.221249    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:09.244788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:09.276490    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.276490    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:09.280706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:09.309520    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.309520    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:09.313105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:09.339092    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.339092    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:09.343484    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:09.369046    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.369046    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:09.373188    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:09.403810    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.403810    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:09.407108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:09.437156    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.437156    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:09.441754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:09.469752    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.469810    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:09.473378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:09.503754    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.503754    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:09.503754    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:09.503754    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:09.533645    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:09.533718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.587529    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:09.587529    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:09.647801    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:09.647801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:09.686577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:09.686577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:09.782674    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:12.288199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:12.313967    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:12.344043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.344043    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:12.348347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:12.378683    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.378683    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:12.382106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:12.411599    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.411599    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:12.415131    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:12.445826    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.445873    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:12.450940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:12.481043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.481078    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:12.484800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:12.512969    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.512990    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:12.515915    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:12.548151    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.548228    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:12.551706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:12.584039    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.584039    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:12.584039    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:12.584039    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:12.646680    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:12.646680    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:12.686545    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:12.686545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:12.804767    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:12.804767    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:12.804767    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:12.831866    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:12.831866    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
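The whole sweep repeats on a roughly three-second cadence (06:24:00, :03, :06, :09, :12, ...), each round triggered by another "sudo pgrep -xnf kube-apiserver.*minikube.*" that keeps exiting non-zero because no apiserver process exists yet. A schematic poll loop with the same shape (an assumed structure for illustration, not minikube's actual wait logic):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s spacing in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}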
	I1216 06:24:15.392415    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:15.416435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:15.445044    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.445044    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:15.449260    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:15.476688    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.476688    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:15.481012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:15.508866    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.508928    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:15.512662    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:15.541002    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.541002    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:15.545363    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:15.574947    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.574991    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:15.578407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:15.604751    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.604751    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:15.608699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:15.639261    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.639338    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:15.642317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:15.674404    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.674404    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:15.674404    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:15.674404    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:15.736218    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:15.736218    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:15.774188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:15.774188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:15.862546    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:15.862546    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:15.862546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:15.888115    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:15.888115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
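The describe-nodes step shells out to the bundled kubectl with an explicit --kubeconfig. The programmatic equivalent in client-go fails here with the same connection-refused error (a sketch assuming the k8s.io/client-go module; the kubeconfig path is taken from the logged command):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("build client:", err)
		return
	}
	// With no apiserver listening on 8443 this List call fails just like
	// the kubectl invocations in the log ("connection refused").
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("list nodes:", err)
		return
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}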
	I1216 06:24:18.441031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:18.465207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:18.495447    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.495481    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:18.498929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:18.528412    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.528476    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:18.531543    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:18.560175    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.560175    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:18.563996    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:18.592824    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.592894    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:18.596175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:18.623746    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.623746    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:18.627099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:18.652978    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.653013    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:18.656407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:18.683637    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.683686    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:18.687125    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:18.716903    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.716942    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:18.716964    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:18.716981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:18.743123    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:18.743675    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.794891    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:18.794891    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:18.858345    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:18.858345    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:18.894242    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:18.894242    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:18.979844    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:21.485585    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:21.510290    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:21.539823    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.539823    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:21.543159    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:21.575241    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.575241    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:21.579330    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:21.607389    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.607490    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:21.611023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:21.642332    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.642332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:21.645973    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:21.671339    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.671390    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:21.675048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:21.704483    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.704483    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:21.708499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:21.734944    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.735027    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:21.738688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:21.768890    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.768890    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:21.768987    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:21.768987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:21.800297    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:21.800344    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:21.854571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:21.854571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:21.921230    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:21.921230    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:21.961787    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:21.961787    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:22.060842    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:24.566957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:24.591909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:24.624010    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.624010    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:24.627550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:24.657938    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.657938    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:24.661917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:24.688848    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.688848    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:24.692388    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:24.722130    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.722165    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:24.725802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:24.754067    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.754134    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:24.757294    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:24.783522    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.783595    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:24.787022    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:24.818531    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.818531    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:24.822200    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:24.851316    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.851371    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:24.851391    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:24.851391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:24.940030    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:24.941511    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:24.941511    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:24.967127    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:24.967127    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:25.018271    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:25.018358    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:25.077769    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:25.077769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:27.621222    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:27.644179    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:27.675033    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.675033    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:27.678724    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:27.707945    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.707945    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:27.712443    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:27.740635    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.740635    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:27.744539    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:27.775332    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.775332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:27.779621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:27.807884    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.807884    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:27.812207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:27.843877    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.843877    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:27.850126    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:27.878365    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.878365    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:27.883323    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:27.911733    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.911733    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:27.911733    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:27.911733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:27.975085    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:27.975085    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:28.011926    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:28.011926    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:28.117959    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:28.117959    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:28.117959    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:28.146135    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:28.146135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:30.702904    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:30.732783    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:30.768726    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.768726    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:30.772432    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:30.804888    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.804888    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:30.809005    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:30.839403    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.839403    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:30.843668    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:30.874013    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.874013    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:30.878013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:30.906934    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.906934    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:30.911178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:30.936942    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.936942    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:30.940954    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:30.967843    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.967843    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:30.973798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:31.000614    8452 logs.go:282] 0 containers: []
	W1216 06:24:31.000614    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:31.000614    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:31.000614    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:31.063545    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:31.063545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:31.101704    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:31.101704    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:31.201356    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:31.201356    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:31.201356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:31.229634    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:31.229634    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:33.780745    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:33.805148    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:33.836319    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.836319    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:33.840094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:33.872138    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.872167    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:33.875487    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:33.908318    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.908318    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:33.912197    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:33.940179    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.940223    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:33.944152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:33.974912    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.974912    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:33.978728    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:34.004557    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.004557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:34.008971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:34.037591    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.037591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:34.041537    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:34.073153    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.073153    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:34.073153    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:34.073153    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:34.139585    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:34.139585    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:34.177888    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:34.177888    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:34.273589    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:34.273589    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:34.273589    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:34.298805    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:34.298805    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:36.851957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:36.889887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:36.919682    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.919682    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:36.923468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:36.953008    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.953073    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:36.957253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:36.985770    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.985770    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:36.989059    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:37.015702    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.015702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:37.019508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:37.046311    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.046351    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:37.050327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:37.087936    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.087936    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:37.092175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:37.121271    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.121271    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:37.125767    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:37.153753    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.153814    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:37.153814    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:37.153869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:37.218058    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:37.218058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:37.256162    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:37.257161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:37.349292    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:37.349292    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:37.349292    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:37.378861    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:37.379384    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:39.931797    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:39.956069    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:39.991154    8452 logs.go:282] 0 containers: []
	W1216 06:24:39.991154    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:39.994809    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:40.021488    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.021488    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:40.025604    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:40.055464    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.055464    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:40.059576    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:40.085410    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.086402    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:40.090048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:40.120389    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.120389    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:40.125766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:40.159925    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.159962    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:40.163912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:40.190820    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.190820    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:40.194350    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:40.223821    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.223886    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:40.223886    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:40.223886    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:40.292033    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:40.292033    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:40.331274    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:40.331274    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:40.423708    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:40.423708    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:40.423708    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:40.452101    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:40.452101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.005925    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:43.029165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:43.060601    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.060601    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:43.064304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:43.092446    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.092446    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:43.096552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:43.127295    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.127347    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:43.130913    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:43.159919    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.159986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:43.163049    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:43.190310    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.190384    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:43.194093    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:43.223641    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.223641    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:43.227270    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:43.254592    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.254592    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:43.259912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:43.293166    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.293166    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:43.293166    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:43.293166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:43.328685    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:43.328685    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:43.412970    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:43.413012    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:43.413042    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:43.444573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:43.444573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.501857    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:43.501857    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.068154    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:46.095291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:46.125740    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.125740    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:46.131016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:46.160926    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.160926    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:46.164909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:46.192634    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.192634    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:46.196346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:46.224203    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.224203    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:46.228650    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:46.255541    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.255541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:46.259732    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:46.289377    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.289377    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:46.293566    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:46.321342    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.321342    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:46.325492    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:46.352311    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.352342    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:46.352342    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:46.352382    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.416761    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:46.416761    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:46.469641    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:46.469641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:46.580672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:46.581191    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:46.581229    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:46.608166    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:46.608166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:49.162834    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:49.187402    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:49.219893    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.219893    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:49.223424    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:49.252338    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.252338    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:49.255900    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:49.286106    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.286131    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:49.289776    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:49.317141    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.317141    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:49.322761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:49.353605    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.353605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:49.357674    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:49.385747    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.385793    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:49.388757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:49.417812    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.417812    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:49.421500    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:49.452746    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.452797    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:49.452797    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:49.452797    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:49.516432    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:49.516432    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:49.553647    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:49.553647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:49.647049    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:49.647087    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:49.647087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:49.671889    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:49.671889    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:52.224199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:52.248067    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:52.282412    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.282412    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:52.286308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:52.315955    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.315955    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:52.319894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:52.353188    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.353188    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:52.356528    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:52.387579    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.387579    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:52.392336    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:52.421909    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.421909    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:52.425890    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:52.458902    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.458902    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:52.462430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:52.498067    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.498140    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:52.501354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:52.528125    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.528125    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:52.528125    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:52.528125    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:52.593845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:52.593845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:52.632779    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:52.632779    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:52.732902    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:52.721650   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.722944   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.723751   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.725908   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.727170   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:52.732902    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:52.732902    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:52.762437    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:52.762437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.328400    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:55.355014    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:55.387364    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.387364    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:55.391085    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:55.417341    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.417341    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:55.421141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:55.450785    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.450785    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:55.454454    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:55.482484    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.482484    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:55.486100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:55.513682    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.513682    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:55.517291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:55.548548    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.548548    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:55.552971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:55.583380    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.583380    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:55.587471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:55.618619    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.618619    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:55.618619    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:55.618686    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:55.646962    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:55.646962    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.695480    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:55.695480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:55.757470    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:55.757470    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:55.796071    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:55.796071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:55.889833    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:55.877628   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.878745   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.880269   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.881391   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.882702   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:58.396122    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:58.423573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:58.454757    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.454757    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:58.460430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:58.490597    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.490597    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:58.493832    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:58.523149    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.523149    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:58.526960    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:58.558649    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.558649    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:58.562228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:58.591400    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.591400    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:58.595569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:58.624162    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.624162    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:58.628070    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:58.660578    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.660578    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:58.664236    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:58.693155    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.693155    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:58.693155    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:58.693155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:58.732408    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:58.733409    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:58.823465    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:58.812767   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.814019   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.815130   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.816828   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.818278   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:58.823465    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:58.823465    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:58.848772    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:58.848772    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:58.900567    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:58.900567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
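With no component containers to inspect, the gatherer still collects host-side evidence on each pass: journalctl for the docker/cri-docker and kubelet units, a filtered dmesg tail, and a container-status listing that prefers crictl but falls back to docker when crictl is absent (the `which crictl || echo crictl` idiom above). The equivalent standalone commands, under the same hypothetical node name:

	NODE=no-preload-123456
	docker exec "$NODE" sudo journalctl -u docker -u cri-docker -n 400
	docker exec "$NODE" sudo journalctl -u kubelet -n 400
	docker exec "$NODE" /bin/bash -c \
	  "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	# Single quotes keep $(which ...) from expanding on the host; it must run in the node.
	docker exec "$NODE" /bin/bash -c \
	  'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'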
	I1216 06:25:01.465828    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:01.490385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:01.520316    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.520316    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:01.524299    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:01.555350    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.555350    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:01.559239    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:01.587077    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.587077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:01.591421    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:01.623853    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.623853    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:01.627746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:01.658165    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.658165    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:01.661588    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:01.703310    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.703310    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:01.709361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:01.740903    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.740903    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:01.744287    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:01.773431    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.773431    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:01.773431    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:01.773431    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:01.863541    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:01.853956   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.855113   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.856000   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.858627   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.859841   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:01.863541    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:01.863541    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:01.891816    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:01.891816    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:01.936351    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:01.936351    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:01.997563    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:01.997563    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.541470    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:04.565886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:04.595881    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.595881    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:04.599716    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:04.629724    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.629749    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:04.633814    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:04.666020    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.666047    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:04.669510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:04.699730    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.699730    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:04.704016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:04.734540    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.734540    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:04.738414    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:04.765651    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.765651    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:04.769397    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:04.797315    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.797315    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:04.801409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:04.832845    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.832845    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:04.832845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:04.832845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.869617    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:04.869617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:04.958334    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:04.947769   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.948641   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.950127   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.953617   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.954566   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:04.958334    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:04.958334    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:04.983497    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:04.983497    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:05.037861    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:05.037887    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.603239    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:07.626775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:07.655146    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.655146    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:07.658648    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:07.688192    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.688227    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:07.691749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:07.723836    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.723836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:07.727536    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:07.761238    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.761238    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:07.764987    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:07.792890    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.792890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:07.796847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:07.824734    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.824734    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:07.828821    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:07.859399    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.859399    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:07.862780    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:07.893406    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.893406    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:07.893457    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:07.893480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.954656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:07.954656    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:07.992200    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:07.993203    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:08.077979    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:08.068614   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.069601   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.072821   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.074198   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.075251   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:08.077979    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:08.077979    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:08.102718    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:08.102718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:10.662101    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:10.688889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:10.721934    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.721996    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:10.727012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:10.760697    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.760746    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:10.763961    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:10.791222    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.791293    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:10.795121    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:10.826239    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.826317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:10.829753    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:10.857355    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.857355    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:10.861145    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:10.903922    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.903922    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:10.907990    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:10.937216    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.937281    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:10.940707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:10.969086    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.969086    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:10.969086    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:10.969238    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:11.062109    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:11.051521   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.052462   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.056878   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.058033   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.059089   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:11.062109    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:11.062109    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:11.090185    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:11.090185    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:11.141444    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:11.141444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:11.199181    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:11.199181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:13.741347    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:13.766441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:13.800424    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.800424    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:13.805169    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:13.835040    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.835040    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:13.839295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:13.864861    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.866077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:13.869598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:13.898887    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.898887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:13.903167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:13.931208    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.931208    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:13.936649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:13.963722    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.963722    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:13.967474    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:13.998640    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.998640    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:14.002572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:14.031349    8452 logs.go:282] 0 containers: []
	W1216 06:25:14.031401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:14.031401    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:14.031401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:14.124587    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:14.114187   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.115232   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.117492   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.120421   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.121924   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:14.124587    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:14.124714    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:14.153583    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:14.153583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:14.202636    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:14.202636    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:14.260591    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:14.260591    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:16.808603    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:16.833787    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:16.864300    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.864300    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:16.868592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:16.897549    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.897549    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:16.900917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:16.931516    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.931557    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:16.936698    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:16.965053    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.965053    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:16.969015    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:16.997017    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.997017    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:17.000551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:17.028733    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.028733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:17.032830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:17.062242    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.062242    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:17.066193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:17.096111    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.096186    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:17.096186    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:17.096243    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:17.126801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:17.126801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:17.178392    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:17.178392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:17.239223    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:17.239223    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:17.276363    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:17.277364    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:17.362910    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:17.350082   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.351537   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.353217   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356242   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356652   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:19.869062    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:19.894371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:19.924915    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.924915    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:19.929351    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:19.956535    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.956535    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:19.960534    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:19.989334    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.989334    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:19.993202    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:20.021108    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.021108    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:20.025230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:20.054251    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.054251    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:20.057788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:20.088787    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.088860    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:20.092250    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:20.120577    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.120577    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:20.123857    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:20.153015    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.153015    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:20.153015    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:20.153015    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:20.241391    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:20.241391    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:20.241391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:20.267492    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:20.267554    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:20.321240    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:20.321880    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:20.384978    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:20.384978    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:22.926087    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:22.949774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:22.982854    8452 logs.go:282] 0 containers: []
	W1216 06:25:22.982854    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:22.986923    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:23.017638    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.017638    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:23.021130    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:23.052442    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.052667    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:23.058175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:23.085210    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.085210    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:23.089664    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:23.120747    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.120795    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:23.124581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:23.150600    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.150600    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:23.154602    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:23.182147    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.182147    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:23.185649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:23.217087    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.217087    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:23.217087    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:23.217087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:23.280619    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:23.280619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:23.318090    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:23.318090    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:23.406270    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:23.406270    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:23.406270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:23.435128    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:23.435128    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:25.989934    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:26.012706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:26.043141    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.043141    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:26.047435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:26.075985    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.075985    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:26.079830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:26.110575    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.110575    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:26.113774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:26.144668    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.144668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:26.148428    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:26.175392    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.175392    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:26.179120    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:26.211067    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.211067    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:26.215072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:26.243555    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.243586    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:26.246934    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:26.279876    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.279876    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:26.279876    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:26.279876    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:26.387447    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:26.387488    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:26.387537    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:26.413896    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:26.413896    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:26.462318    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:26.462318    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:26.527832    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:26.527832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.072565    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:29.096390    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:29.127989    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.127989    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:29.131385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:29.158741    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.158741    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:29.162538    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:29.190346    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.190346    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:29.193798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:29.222234    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.222234    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:29.225740    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:29.252553    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.252553    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:29.256489    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:29.285679    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.285733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:29.289742    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:29.320841    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.321050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:29.324841    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:29.352461    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.352587    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:29.352615    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:29.352615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:29.419045    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:29.419045    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.457659    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:29.457659    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:29.544155    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:29.544155    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:29.544155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:29.571612    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:29.571646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:32.139910    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:32.164438    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:32.196526    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.196526    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:32.200231    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:32.226279    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.226279    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:32.230146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:32.257831    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.257831    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:32.262665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:32.293641    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.293641    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:32.297746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:32.327055    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.327055    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:32.331274    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:32.362206    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.362206    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:32.365146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:32.394600    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.394600    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:32.400058    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:32.428075    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.428075    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:32.428075    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:32.428075    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:32.491661    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:32.491661    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:32.528847    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:32.528847    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:32.616464    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:32.616464    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:32.616464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:32.642397    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:32.642397    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:35.191852    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:35.225285    8452 out.go:203] 
	W1216 06:25:35.227244    8452 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 06:25:35.227244    8452 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 06:25:35.227244    8452 out.go:285] * Related issues:
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 06:25:35.230096    8452 out.go:203] 
	
	
	==> Docker <==
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570336952Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570433565Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570447467Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570465470Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570473171Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570498774Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570539380Z" level=info msg="Initializing buildkit"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.671982027Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680146533Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680337859Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680374664Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680404268Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:16:00 no-preload-686300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:16:01 no-preload-686300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:16:01 no-preload-686300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:31:09.668195   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:31:09.669156   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:31:09.672314   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:31:09.674771   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:31:09.676045   17541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633501] CPU: 10 PID: 466820 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f865800db20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f865800daf6.
	[  +0.000001] RSP: 002b:00007ffc8c624780 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000033] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.839091] CPU: 12 PID: 466960 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fa6af131b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fa6af131af6.
	[  +0.000001] RSP: 002b:00007ffe97387e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 06:22] tmpfs: Unknown parameter 'noswap'
	[  +9.428310] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:31:09 up  2:07,  0 user,  load average: 0.32, 1.33, 2.82
	Linux no-preload-686300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:31:06 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:06 no-preload-686300 kubelet[17349]: E1216 06:31:06.819646   17349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:31:06 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:31:06 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:31:07 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 16 06:31:07 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:07 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:07 no-preload-686300 kubelet[17365]: E1216 06:31:07.569831   17365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:31:07 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:31:07 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:31:08 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 16 06:31:08 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:08 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:08 no-preload-686300 kubelet[17402]: E1216 06:31:08.331661   17402 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:31:08 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:31:08 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:31:08 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1207.
	Dec 16 06:31:08 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:08 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:09 no-preload-686300 kubelet[17419]: E1216 06:31:09.080378   17419 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:31:09 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:31:09 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:31:09 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 16 06:31:09 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:31:09 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
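The wait loop in the log above alternates two probes: a pgrep check for a kube-apiserver process and a docker ps lookup for each control-plane container. Both can be replayed by hand against the node from this run (a sketch; the profile name no-preload-686300 and both commands are taken verbatim from the log):

	# process probe used by minikube's apiserver wait loop
	minikube ssh -p no-preload-686300 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# per-component container probe (returns nothing here, matching the "0 containers" lines)
	minikube ssh -p no-preload-686300 -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'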
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 2 (592.1382ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.94s)
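The kubelet journal above points at the underlying failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver can never appear. A minimal way to confirm the host's cgroup mode, assuming the same profile name:

	# Docker Desktop reports the cgroup version of its Linux VM (1 here, per the kubelet error)
	docker info --format "{{.CgroupVersion}}"
	# inside the minikube node: tmpfs means cgroup v1, cgroup2fs means v2
	minikube ssh -p no-preload-686300 -- stat -fc %T /sys/fs/cgroup/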

TestStartStop/group/newest-cni/serial/Pause (12.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-256200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (568.943ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-256200 -n newest-cni-256200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (581.5725ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-256200 --alsologtostderr -v=1
E1216 06:25:44.171016   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (589.4411ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-256200 -n newest-cni-256200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (574.8012ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
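Each status check above reads a single field of minikube's status output via a Go template passed to --format. The same fields can be fetched in one call; a sketch using the profile from this test:

	# Host, Kubelet and APIServer are the field names the checks above rely on
	minikube status -p newest-cni-256200 --format "{{.Host}} {{.Kubelet}} {{.APIServer}}"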
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256200
helpers_test.go:244: (dbg) docker inspect newest-cni-256200:

-- stdout --
	[
	    {
	        "Id": "144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66",
	        "Created": "2025-12-16T06:09:14.512792797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436653,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:19:21.496573864Z",
	            "FinishedAt": "2025-12-16T06:19:16.313765237Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hostname",
	        "HostsPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hosts",
	        "LogPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66-json.log",
	        "Name": "/newest-cni-256200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-256200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256200",
	                "Source": "/var/lib/docker/volumes/newest-cni-256200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256200",
	                "name.minikube.sigs.k8s.io": "newest-cni-256200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8e6d675d034626362ba9bfe3ff7eb692b71509157c5f340d1ebcb47d8e5bca3",
	            "SandboxKey": "/var/run/docker/netns/e8e6d675d034",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55872"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55868"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55869"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55871"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c97a08422fb6ea0a0f62c56d96c89be84aa4e33beba1ccaa82b7390e64b42c8e",
	                    "EndpointID": "fd51517b1d43bd1aa0aedcd49011763e39b0ec0911fbe06e3e82710415d585b2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256200",
	                        "144d2cf5befb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
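The inspect dump shows the container itself still Running with Paused:false, consistent with the Stopped kubelet/apiserver statuses above: pause acts on the Kubernetes components, not on the Docker container. A format template narrows the dump to just those state fields (a sketch over fields present in the JSON above):

	# docker inspect -f applies a Go template to the inspect JSON
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-256200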
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (556.329ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25: (1.4590683s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-030800 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status docker --all --full --no-pager          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat docker --no-pager                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/docker/daemon.json                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo docker system info                                       │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat cri-docker --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cri-dockerd --version                                    │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status containerd --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat containerd --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /lib/systemd/system/containerd.service               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/containerd/config.toml                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo containerd config dump                                   │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status crio --all --full --no-pager            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat crio --no-pager                            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo crio config                                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete  │ -p kubenet-030800                                                               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image   │ newest-cni-256200 image list --format=json                                      │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ pause   │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ unpause │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:21:31
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:21:31.068463    4424 out.go:360] Setting OutFile to fd 1300 ...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.112163    4424 out.go:374] Setting ErrFile to fd 1224...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.126168    4424 out.go:368] Setting JSON to false
	I1216 06:21:31.128157    4424 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7112,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:21:31.129155    4424 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:21:31.133155    4424 out.go:179] * [kubenet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:21:31.136368    4424 notify.go:221] Checking for updates...
	I1216 06:21:31.137751    4424 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:31.140914    4424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:21:31.143313    4424 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:21:31.145626    4424 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:21:31.147629    4424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:21:31.150478    4424 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151727    4424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:21:31.272417    4424 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:21:31.275875    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.534539    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.516919297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.537553    4424 out.go:179] * Using the docker driver based on user configuration
	I1216 06:21:31.541211    4424 start.go:309] selected driver: docker
	I1216 06:21:31.541254    4424 start.go:927] validating driver "docker" against <nil>
	I1216 06:21:31.541286    4424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:21:31.597589    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.842240    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.823958826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
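The docker info dump above is produced by running `docker system info --format "{{json .}}"` (the cli_runner line preceding it) and decoding the JSON into minikube's info struct. A minimal standalone sketch of the same probe, assuming only that `docker` is on PATH; the struct below keeps just a handful of the fields visible in the dump, and the field names follow Docker's JSON keys:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// A small subset of the fields that `docker system info --format "{{json .}}"`
// emits; unknown keys in the JSON are ignored by encoding/json.
type dockerInfo struct {
    ServerVersion   string `json:"ServerVersion"`
    OperatingSystem string `json:"OperatingSystem"`
    NCPU            int    `json:"NCPU"`
    MemTotal        int64  `json:"MemTotal"`
}

func main() {
    out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    if err != nil {
        panic(err)
    }
    var info dockerInfo
    if err := json.Unmarshal(out, &info); err != nil {
        panic(err)
    }
    fmt.Printf("%s on %s: %d CPUs, %d bytes RAM\n",
        info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}

The richer fields in the dump (plugins, warnings, cgroup driver, and so on) decode the same way by adding matching struct fields.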
	I1216 06:21:31.842240    4424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:21:31.843240    4424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:31.846236    4424 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:21:31.848222    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:21:31.848222    4424 start.go:353] cluster config:
	{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:21:31.851222    4424 out.go:179] * Starting "kubenet-030800" primary control-plane node in "kubenet-030800" cluster
	I1216 06:21:31.860233    4424 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:21:31.863229    4424 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:21:31.866228    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:31.866228    4424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:21:31.866228    4424 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:21:31.866228    4424 cache.go:65] Caching tarball of preloaded images
	I1216 06:21:31.866228    4424 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:21:31.866228    4424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:21:31.866228    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:31.866228    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json: {Name:mkd9bbe5249f898d86f7b7f3961735d2ed71d636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
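The WriteFile line above acquires a named lock (Delay:500ms, Timeout:1m0s) before saving config.json. A rough sketch of equivalent semantics, assuming a sidecar ".lock" file created O_EXCL stands in for minikube's real named mutex, with a temp-file rename making the write itself atomic:

package main

import (
    "encoding/json"
    "errors"
    "fmt"
    "os"
    "path/filepath"
    "time"
)

// writeFileLocked approximates the Delay:500ms/Timeout:1m0s behavior from the
// log; this is only an illustration, not minikube's actual lock.go.
func writeFileLocked(path string, v interface{}) error {
    lock := path + ".lock"
    deadline := time.Now().Add(time.Minute)
    for {
        f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
        if err == nil {
            f.Close()
            break
        }
        if time.Now().After(deadline) {
            return errors.New("timed out acquiring " + lock)
        }
        time.Sleep(500 * time.Millisecond)
    }
    defer os.Remove(lock)

    data, err := json.MarshalIndent(v, "", "  ")
    if err != nil {
        return err
    }
    // Write to a temp file in the same directory, then rename into place.
    tmp, err := os.CreateTemp(filepath.Dir(path), ".config-*")
    if err != nil {
        return err
    }
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    tmp.Close()
    return os.Rename(tmp.Name(), path)
}

func main() {
    cfg := map[string]interface{}{"Name": "kubenet-030800", "Memory": 3072}
    fmt.Println(writeFileLocked("config.json", cfg))
}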
	I1216 06:21:31.935458    4424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:21:31.935458    4424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:21:31.935988    4424 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:21:31.936042    4424 start.go:360] acquireMachinesLock for kubenet-030800: {Name:mka6ae821c9ad8ee62e1a8eef0f2acffe81ebe64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:21:31.936202    4424 start.go:364] duration metric: took 160.2µs to acquireMachinesLock for "kubenet-030800"
	I1216 06:21:31.936352    4424 start.go:93] Provisioning new machine with config: &{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:31.936477    4424 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:21:30.055522    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:30.078529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:30.109519    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.109519    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:30.112511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:30.144765    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.144765    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:30.149313    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:30.181852    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.181852    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:30.184838    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:30.217464    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.217504    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:30.221332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:30.249301    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.249301    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:30.252441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:30.278998    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.278998    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:30.281997    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:30.312414    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.312414    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:30.318242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:30.357304    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.357361    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:30.357422    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:30.357422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:30.746929    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:30.746929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:30.795665    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:30.795746    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:30.878691    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:30.878691    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:30.913679    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:30.913679    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:31.010602    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
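All five kubectl attempts in the block above fail identically: the dial to [::1]:8443 is refused because no kube-apiserver container exists yet, consistent with the empty `docker ps` filter results gathered just before. A short probe that confirms the same condition without going through kubectl, assuming the localhost:8443 endpoint from the log:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // "connection refused" here means nothing is bound to the apiserver port yet.
    conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    if err != nil {
        fmt.Println("dial failed (expected while the apiserver is down):", err)
        return
    }
    conn.Close()
    fmt.Println("port 8443 is accepting connections; retry kubectl now")
}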
	I1216 06:21:33.516463    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:33.569431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:33.607164    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.607164    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:33.610177    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:33.645795    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.645795    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:33.649795    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:33.678786    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.678786    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:33.681789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:33.712090    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.712090    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:33.716091    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:33.749207    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.749207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:33.753072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:33.787281    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.787317    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:33.790849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:33.822451    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.822451    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:33.828957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:33.859827    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.859881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:33.859881    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:33.859929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:33.922584    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:33.922584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:34.010640    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:34.010640    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:34.010640    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:34.047937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:34.047937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:34.108035    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:34.108113    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:31.939854    4424 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:21:31.939854    4424 start.go:159] libmachine.API.Create for "kubenet-030800" (driver="docker")
	I1216 06:21:31.939854    4424 client.go:173] LocalClient.Create starting
	I1216 06:21:31.940866    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.946190    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:21:32.002258    4424 cli_runner.go:211] docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:21:32.006251    4424 network_create.go:284] running [docker network inspect kubenet-030800] to gather additional debugging logs...
	I1216 06:21:32.006251    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800
	W1216 06:21:32.057274    4424 cli_runner.go:211] docker network inspect kubenet-030800 returned with exit code 1
	I1216 06:21:32.057274    4424 network_create.go:287] error running [docker network inspect kubenet-030800]: docker network inspect kubenet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-030800 not found
	I1216 06:21:32.057274    4424 network_create.go:289] output of [docker network inspect kubenet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-030800 not found
	
	** /stderr **
	I1216 06:21:32.061267    4424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:21:32.137401    4424 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.168856    4424 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.184860    4424 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.200856    4424 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.216426    4424 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.232146    4424 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d96b0}
	I1216 06:21:32.232146    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 06:21:32.235443    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	W1216 06:21:32.288644    4424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800 returned with exit code 1
	W1216 06:21:32.288644    4424 network_create.go:149] failed to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:21:32.288644    4424 network_create.go:116] failed to create docker network kubenet-030800 192.168.94.0/24, will retry: subnet is taken
	I1216 06:21:32.308048    4424 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.321168    4424 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015f57d0}
	I1216 06:21:32.321265    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:21:32.325637    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	I1216 06:21:32.469323    4424 network_create.go:108] docker network kubenet-030800 192.168.103.0/24 created
	I1216 06:21:32.469323    4424 kic.go:121] calculated static IP "192.168.103.2" for the "kubenet-030800" container
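The sequence above is the network_create retry loop: walk candidate /24 subnets (third octet 49, 58, 67, 76, 85, 94, 103, ... in steps of 9), skip subnets already reserved, attempt `docker network create`, and on "Pool overlaps with other one on this address space" mark the subnet taken and try the next one; the node's static IP is then gateway+1 within the winning subnet. A compact sketch of that walk, assuming the same base and step the log shows:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    name := "kubenet-030800" // network name from the log
    for third := 49; third <= 255; third += 9 {
        subnet := fmt.Sprintf("192.168.%d.0/24", third)
        gateway := fmt.Sprintf("192.168.%d.1", third)
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
        if err == nil {
            fmt.Println("created", name, "on", subnet)
            return
        }
        if strings.Contains(string(out), "Pool overlaps") {
            continue // subnet taken by another network, try the next candidate
        }
        fmt.Println("giving up:", strings.TrimSpace(string(out)))
        return
    }
}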
	I1216 06:21:32.483125    4424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:21:32.541557    4424 cli_runner.go:164] Run: docker volume create kubenet-030800 --label name.minikube.sigs.k8s.io=kubenet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:21:32.608360    4424 oci.go:103] Successfully created a docker volume kubenet-030800
	I1216 06:21:32.611360    4424 cli_runner.go:164] Run: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:21:34.117036    4424 cli_runner.go:217] Completed: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.5056549s)
	I1216 06:21:34.117036    4424 oci.go:107] Successfully prepared a docker volume kubenet-030800
	I1216 06:21:34.117036    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:34.117036    4424 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:21:34.121793    4424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
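The extraction step above mounts the cached lz4 preload tarball read-only into a throwaway kicbase container and untars it into the named volume that becomes the node's /var. The same invocation outside the harness, as a Go exec wrapper for consistency with the other sketches (the host tarball path is illustrative; the image and volume names are taken from the log):

package main

import (
    "os"
    "os/exec"
)

func main() {
    // tar runs inside the kicbase image, reading the read-only preload mount
    // and extracting into the node volume, exactly as in the log line above.
    cmd := exec.Command("docker", "run", "--rm",
        "--entrypoint", "/usr/bin/tar",
        "-v", `C:\preload\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro`, // illustrative host path
        "-v", "kubenet-030800:/extractDir",
        "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141",
        "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}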
	I1216 06:21:37.760556    7800 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:21:37.760556    7800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:21:37.761189    7800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:21:37.761753    7800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:21:37.761881    7800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:21:37.761881    7800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:21:37.764442    7800 out.go:252]   - Generating certificates and keys ...
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:21:37.765188    7800 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:21:37.765955    7800 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:21:37.766018    7800 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:21:37.766124    7800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:21:37.766165    7800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:21:37.766271    7800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:21:37.766333    7800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:21:37.766397    7800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:21:37.766458    7800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:21:37.770151    7800 out.go:252]   - Booting up control plane ...
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:21:37.770817    7800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:21:37.770952    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:21:37.771091    7800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:21:37.771167    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:21:37.771225    7800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:21:37.771366    7800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004327208s
	I1216 06:21:37.771902    7800 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:21:37.772247    7800 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 06:21:37.772484    7800 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:21:37.772735    7800 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:21:37.773067    7800 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.101943404s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.591910767s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002177662s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:21:37.773799    7800 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:21:37.773799    7800 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:21:37.774455    7800 kubeadm.go:319] [mark-control-plane] Marking the node bridge-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:21:37.774523    7800 kubeadm.go:319] [bootstrap-token] Using token: lrkd8c.ky3vlqagn7chac73
	I1216 06:21:37.777890    7800 out.go:252]   - Configuring RBAC rules ...
	I1216 06:21:37.777890    7800 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:21:37.779666    7800 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:21:37.780278    7800 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:21:37.780278    7800 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:21:37.781243    7800 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--control-plane 
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
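The --discovery-token-ca-cert-hash in the join command above is a SHA-256 digest over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, not over the whole certificate. A sketch that recomputes it from a PEM ca.crt, assuming the conventional kubeadm path for the CA file:

package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm location
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil {
        panic("no PEM block found in ca.crt")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // Hash the raw SubjectPublicKeyInfo, matching kubeadm's pin format.
    fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}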
	I1216 06:21:37.782257    7800 cni.go:84] Creating CNI manager for "bridge"
	I1216 06:21:37.785969    7800 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1216 06:21:35.013402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:37.791788    7800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 06:21:37.806804    7800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
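The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the generated bridge CNI config; its exact contents are not reproduced in the log, so the conflist below is only a representative bridge+portmap chain of the shape such files take, embedded in a sketch that writes it where the log does:

package main

import "os"

// Representative bridge CNI conflist (illustrative, not the exact 496 bytes
// from the log); the bridge and portmap plugins ship with the CNI plugin
// package already present on the node.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
    if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
        panic(err)
    }
    if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
        panic(err)
    }
}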
	I1216 06:21:37.825807    7800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-030800 minikube.k8s.io/updated_at=2025_12_16T06_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=bridge-030800 minikube.k8s.io/primary=true
	I1216 06:21:37.839814    7800 ops.go:34] apiserver oom_adj: -16
	I1216 06:21:38.032186    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:38.534048    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.035804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.534294    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:36.686452    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:36.704466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:36.733394    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.733394    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:36.737348    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:36.773510    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.773510    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:36.776509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:36.805498    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.805498    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:36.809501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:36.845499    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.845499    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:36.849511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:36.879646    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.879646    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:36.884108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:36.915408    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.915408    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:36.920279    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:36.952754    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.952754    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:36.955734    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:36.987884    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.987884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:36.987884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:36.987884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:37.053646    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:37.053646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:37.093881    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:37.093881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:37.179527    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:37.179584    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:37.179628    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:37.206261    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:37.206297    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:39.778747    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:39.802598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:39.832179    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.832179    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:39.836704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:39.869121    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.869121    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:39.873774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:39.909668    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.909668    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:39.914691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:39.947830    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.947830    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:39.951594    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:40.034177    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:40.535099    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.034558    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.535126    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.034691    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.533593    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.035143    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.831113    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:44.554108    7800 kubeadm.go:1114] duration metric: took 6.7282073s to wait for elevateKubeSystemPrivileges
	I1216 06:21:44.554108    7800 kubeadm.go:403] duration metric: took 23.3439157s to StartCluster
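The run of `kubectl get sa default` lines above is a fixed-interval poll, roughly every 500ms, until the default service account exists; that existence check is what gates elevateKubeSystemPrivileges, and the whole wait took about 6.7s here. A generic sketch of the same wait, assuming kubectl on PATH and the kubeconfig path from the log:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    deadline := time.Now().Add(5 * time.Minute)
    for time.Now().Before(deadline) {
        err := exec.Command("kubectl", "get", "sa", "default",
            "--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
        if err == nil {
            fmt.Println("default service account exists; safe to bind RBAC to it")
            return
        }
        time.Sleep(500 * time.Millisecond) // same cadence as the log lines
    }
    fmt.Println("timed out waiting for the default service account")
}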
	I1216 06:21:44.554108    7800 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.554108    7800 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:44.555899    7800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.557179    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:21:44.557179    7800 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:44.557179    7800 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:21:44.557179    7800 addons.go:70] Setting storage-provisioner=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:239] Setting addon storage-provisioner=true in "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:70] Setting default-storageclass=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 host.go:66] Checking if "bridge-030800" exists ...
	I1216 06:21:44.557179    7800 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-030800"
	I1216 06:21:44.557179    7800 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.910438    7800 out.go:179] * Verifying Kubernetes components...
	I1216 06:21:39.982557    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.982557    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:39.986642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:40.018169    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.018169    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:40.021165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:40.051243    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.051243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:40.057090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:40.084414    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.084414    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:40.084414    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:40.084414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:40.144414    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:40.144414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:40.179632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:40.179632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:40.269800    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:40.270323    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:40.270323    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:40.298399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:40.298399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:42.860131    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:42.883733    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:42.914204    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.914204    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:42.918228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:42.950380    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.950459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:42.954335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:42.985562    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.985562    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:42.988741    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:43.016773    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.016773    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:43.020957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:43.059042    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.059042    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:43.062546    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:43.091529    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.091529    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:43.095547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:43.122188    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.122188    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:43.126264    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:43.154603    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.154667    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:43.154687    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:43.154709    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:43.217437    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:43.217437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:43.256550    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:43.256550    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:43.334672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:43.334672    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:43.334672    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:43.363324    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:43.363324    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:44.625758    7800 addons.go:239] Setting addon default-storageclass=true in "bridge-030800"
	I1216 06:21:44.961765    7800 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:21:44.962159    7800 host.go:66] Checking if "bridge-030800" exists ...
	W1216 06:21:45.051007    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:45.413866    7800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:45.416342    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:45.428762    7800 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.428762    7800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:21:45.433231    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.481472    7800 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:45.481472    7800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:21:45.485567    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.487870    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.534738    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:21:45.540734    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.651776    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.743561    7800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:21:45.947134    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:48.661269    7800 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.1264885s)
	I1216 06:21:48.661269    7800 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.2776091s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.1858261s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.9822555s)
	I1216 06:21:48.933443    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:48.974829    7800 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:21:48.977844    7800 addons.go:530] duration metric: took 4.4206041s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:21:48.994296    7800 node_ready.go:35] waiting up to 15m0s for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 node_ready.go:49] node "bridge-030800" is "Ready"
	I1216 06:21:49.024312    7800 node_ready.go:38] duration metric: took 30.0163ms for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:21:49.030307    7800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.051593    7800 api_server.go:72] duration metric: took 4.4943521s to wait for apiserver process to appear ...
	I1216 06:21:49.051593    7800 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:21:49.051593    7800 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56268/healthz ...
	I1216 06:21:49.061499    7800 api_server.go:279] https://127.0.0.1:56268/healthz returned 200:
	ok
	I1216 06:21:49.063514    7800 api_server.go:141] control plane version: v1.34.2
	I1216 06:21:49.063514    7800 api_server.go:131] duration metric: took 11.9204ms to wait for apiserver health ...
	I1216 06:21:49.064510    7800 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:21:49.088115    7800 system_pods.go:59] 8 kube-system pods found
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.088115    7800 system_pods.go:74] duration metric: took 23.6038ms to wait for pod list to return data ...
	I1216 06:21:49.088115    7800 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:21:49.094110    7800 default_sa.go:45] found service account: "default"
	I1216 06:21:49.094110    7800 default_sa.go:55] duration metric: took 5.9949ms for default service account to be created ...
	I1216 06:21:49.094110    7800 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:21:49.100097    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.100097    7800 retry.go:31] will retry after 202.33386ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.170358    7800 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-030800" context rescaled to 1 replicas
	I1216 06:21:49.310950    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.310950    7800 retry.go:31] will retry after 302.122926ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.630338    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630577    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.630663    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.630695    7800 retry.go:31] will retry after 447.973015ms: missing components: kube-dns, kube-proxy
	I1216 06:21:45.929428    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:45.950755    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:45.982605    8452 logs.go:282] 0 containers: []
	W1216 06:21:45.982605    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:45.987711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:46.020649    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.020649    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:46.024931    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:46.058836    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.058836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:46.066651    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:46.094860    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.094860    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:46.098689    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:46.127246    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.127246    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:46.130937    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:46.159519    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.159519    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:46.163609    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:46.195483    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.195483    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:46.199178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:46.229349    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.229349    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:46.229349    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:46.229349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:46.292392    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:46.292392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:46.330903    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:46.330903    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:46.427283    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:46.427283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:46.427283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:46.458647    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:46.458647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:49.005293    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.027308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:49.061499    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.061499    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:49.064510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:49.094110    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.094110    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:49.097105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:49.126749    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.126749    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:49.132748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:49.169344    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.169344    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:49.174332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:49.209103    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.209103    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:49.214581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:49.248644    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.248644    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:49.251643    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:49.281632    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.281632    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:49.285645    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:49.316955    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.316955    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:49.316955    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:49.317942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:49.391656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:49.391738    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:49.432724    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:49.432724    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:49.523989    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:49.523989    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:49.523989    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:49.552004    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:49.552004    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:48.467044    4424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.3450525s)
	I1216 06:21:48.467044    4424 kic.go:203] duration metric: took 14.349809s to extract preloaded images to volume ...
	I1216 06:21:48.470844    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:48.730876    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:48.710057733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:48.733867    4424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:21:48.983392    4424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-030800 --name kubenet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-030800 --network kubenet-030800 --ip 192.168.103.2 --volume kubenet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:21:49.764686    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Running}}
	I1216 06:21:49.828590    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:49.890595    4424 cli_runner.go:164] Run: docker exec kubenet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:21:50.004225    4424 oci.go:144] the created container "kubenet-030800" has a running status.
	I1216 06:21:50.005228    4424 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.057161    4424 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:21:50.141101    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:50.207656    4424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:21:50.207656    4424 kic_runner.go:114] Args: [docker exec --privileged kubenet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:21:50.326664    4424 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.087090    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.087090    7800 retry.go:31] will retry after 426.637768ms: missing components: kube-dns, kube-proxy
	I1216 06:21:50.538640    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.538640    7800 retry.go:31] will retry after 479.139187ms: missing components: kube-dns
	I1216 06:21:51.025065    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.025065    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:51.025193    7800 retry.go:31] will retry after 758.159415ms: missing components: kube-dns
	I1216 06:21:51.791088    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Running
	I1216 06:21:51.791088    7800 system_pods.go:126] duration metric: took 2.6969413s to wait for k8s-apps to be running ...
	I1216 06:21:51.791088    7800 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:21:51.798336    7800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:21:51.818183    7800 system_svc.go:56] duration metric: took 27.0943ms WaitForService to wait for kubelet
	I1216 06:21:51.818183    7800 kubeadm.go:587] duration metric: took 7.2609035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:51.818183    7800 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:21:51.825244    7800 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:21:51.825244    7800 node_conditions.go:123] node cpu capacity is 16
	I1216 06:21:51.825244    7800 node_conditions.go:105] duration metric: took 7.0607ms to run NodePressure ...
	I1216 06:21:51.825244    7800 start.go:242] waiting for startup goroutines ...
	I1216 06:21:51.825244    7800 start.go:247] waiting for cluster config update ...
	I1216 06:21:51.825244    7800 start.go:256] writing updated cluster config ...
	I1216 06:21:51.833706    7800 ssh_runner.go:195] Run: rm -f paused
	I1216 06:21:51.841597    7800 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:21:51.851622    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:21:53.862268    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:52.109148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:52.140748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:52.186855    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.186855    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:52.193111    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:52.227511    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.227511    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:52.232508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:52.265331    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.265331    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:52.270635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:52.301130    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.301130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:52.307669    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:52.342623    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.342623    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:52.347794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:52.387246    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.387246    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:52.392629    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:52.445055    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.445143    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:52.449252    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:52.480071    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.480071    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:52.480071    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:52.480071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:52.547115    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:52.547115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:52.590630    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:52.590630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:52.690412    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
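
The block above records repeated "connection refused" errors against [::1]:8443, i.e. kubectl found no apiserver listening on the node at that moment. A generic way to confirm that directly on the node (standard iproute2 tooling, not a command taken from this log):

    # Check for a listener on the apiserver port; prints the owning process if present.
    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
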
	I1216 06:21:52.690412    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:52.690412    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:52.718573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:52.718573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:52.546527    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:52.603159    4424 machine.go:94] provisionDockerMachine start ...
	I1216 06:21:52.606161    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.662674    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.679442    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.679519    4424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:21:52.842464    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
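
The repeated `docker container inspect -f` calls above use a Go template to extract the host port Docker mapped to the container's SSH port (22/tcp). The same query can be run standalone; the container name is taken from this log:

    # Print the host port bound to 22/tcp inside the kubenet-030800 container.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      kubenet-030800
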
	
	I1216 06:21:52.842464    4424 ubuntu.go:182] provisioning hostname "kubenet-030800"
	I1216 06:21:52.846473    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.908771    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.908771    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.908771    4424 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-030800 && echo "kubenet-030800" | sudo tee /etc/hostname
	I1216 06:21:53.084692    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:53.088917    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.150284    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.150284    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.150284    4424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:21:53.322772    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
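
The SSH command above patches /etc/hosts idempotently: it rewrites the 127.0.1.1 entry only when the hostname is not already present, and appends one otherwise. Extracted as a standalone sketch (hostname hardcoded here purely for illustration):

    #!/bin/bash
    # Map 127.0.1.1 to the machine hostname, touching /etc/hosts only when needed.
    HOST=kubenet-030800
    if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
      else
        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
      fi
    fi
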
	I1216 06:21:53.322772    4424 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:21:53.322772    4424 ubuntu.go:190] setting up certificates
	I1216 06:21:53.322772    4424 provision.go:84] configureAuth start
	I1216 06:21:53.326658    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:53.379472    4424 provision.go:143] copyHostCerts
	I1216 06:21:53.379472    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:21:53.379472    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:21:53.379472    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:21:53.381506    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:21:53.381506    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:21:53.382025    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:21:53.383238    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:21:53.383286    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:21:53.383622    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:21:53.384729    4424 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-030800 san=[127.0.0.1 192.168.103.2 kubenet-030800 localhost minikube]
	I1216 06:21:53.446404    4424 provision.go:177] copyRemoteCerts
	I1216 06:21:53.450578    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:21:53.453632    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.508049    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:53.625841    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:21:53.652177    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:21:53.678648    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:21:53.702593    4424 provision.go:87] duration metric: took 379.8156ms to configureAuth
	I1216 06:21:53.702593    4424 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:21:53.703116    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:53.706020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.763080    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.763659    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.763659    4424 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:21:53.941197    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:21:53.941229    4424 ubuntu.go:71] root file system type: overlay
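
The probe above reads the filesystem type backing the root mount; inside the kicbase container it reports overlay. The same one-liner is runnable anywhere:

    # Print the filesystem type of the root mount (e.g. "overlay" inside the container).
    df --output=fstype / | tail -n 1
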
	I1216 06:21:53.941395    4424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:21:53.945310    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.000318    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.000318    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.000318    4424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:21:54.194977    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:21:54.198986    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.262183    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.262873    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.262912    4424 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:21:55.764091    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:21:54.174803160 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:21:55.764091    4424 machine.go:97] duration metric: took 3.1608879s to provisionDockerMachine
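
The unit written above uses the standard systemd idiom for replacing a command in a non-oneshot service: an empty `ExecStart=` clears the value inherited from the base unit, and the next `ExecStart=` sets the replacement (the comment block inside the unit explains why). Note also the diff-guarded install: the new file is only moved into place, and docker restarted, when `diff -u` reports a change. A minimal drop-in using the same pattern, with illustrative flags rather than the full set from this log:

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    # Clear the ExecStart inherited from the base unit, then set the new command.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    # Apply with: sudo systemctl daemon-reload && sudo systemctl restart docker
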
	I1216 06:21:55.764091    4424 client.go:176] duration metric: took 23.8239056s to LocalClient.Create
	I1216 06:21:55.764091    4424 start.go:167] duration metric: took 23.8239056s to libmachine.API.Create "kubenet-030800"
	I1216 06:21:55.764091    4424 start.go:293] postStartSetup for "kubenet-030800" (driver="docker")
	I1216 06:21:55.764091    4424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:21:55.769330    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:21:55.774020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:55.832721    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:55.960433    4424 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:21:55.968801    4424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:21:55.968801    4424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:21:55.969505    4424 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:21:55.973822    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:21:55.985938    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:21:56.011522    4424 start.go:296] duration metric: took 247.4281ms for postStartSetup
	I1216 06:21:56.016962    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.071317    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:56.078704    4424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:21:56.082131    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	W1216 06:21:55.087921    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:56.146380    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.278810    4424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:21:56.289463    4424 start.go:128] duration metric: took 24.3526481s to createHost
	I1216 06:21:56.289463    4424 start.go:83] releasing machines lock for "kubenet-030800", held for 24.352923s
	I1216 06:21:56.293770    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.349762    4424 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:21:56.354527    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.355718    4424 ssh_runner.go:195] Run: cat /version.json
	I1216 06:21:56.359207    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.419217    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.420010    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.548149    4424 ssh_runner.go:195] Run: systemctl --version
	W1216 06:21:56.549226    4424 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:21:56.567514    4424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:21:56.574755    4424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:21:56.580435    4424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:21:56.633416    4424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:21:56.633416    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:56.633416    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:56.633416    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:56.657618    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 06:21:56.658090    4424 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:21:56.658134    4424 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:21:56.678200    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:21:56.690681    4424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:21:56.695430    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:21:56.714310    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.735757    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:21:56.754647    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.771876    4424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:21:56.790078    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:21:56.810936    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:21:56.828529    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:21:56.859717    4424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:21:56.876724    4424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:21:56.891719    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.036224    4424 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 06:21:57.185425    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:57.185522    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:57.190092    4424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:21:57.213249    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.239566    4424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:21:57.303231    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.326154    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:21:57.344861    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:57.372889    4424 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:21:57.386009    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:21:57.401220    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1216 06:21:57.422607    4424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:21:57.590920    4424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:21:57.727211    4424 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:21:57.727211    4424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:21:57.751771    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:21:57.772961    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.912458    4424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:21:58.834645    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:21:58.856232    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:21:58.880727    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:58.906712    4424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:21:59.052553    4424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:21:59.194941    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.333924    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:21:59.357147    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:21:59.379570    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.513788    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:21:59.631489    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:59.649336    4424 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:21:59.653752    4424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:21:59.660755    4424 start.go:564] Will wait 60s for crictl version
	I1216 06:21:59.665368    4424 ssh_runner.go:195] Run: which crictl
	I1216 06:21:59.677200    4424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:21:59.717428    4424 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:21:59.720622    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:21:59.765567    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	W1216 06:21:55.865199    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	W1216 06:21:58.365962    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:55.273773    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:55.297441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:55.334351    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.334404    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:55.338338    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:55.372344    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.372344    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:55.375335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:55.429711    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.429711    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:55.432707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:55.463415    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.463415    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:55.466882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:55.495871    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.495871    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:55.499782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:55.530135    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.530135    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:55.534032    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:55.561956    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.561956    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:55.567456    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:55.598684    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.598684    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:55.598684    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:55.598684    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:55.661553    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:55.661553    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:55.699330    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:55.699330    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:55.806271    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:55.806271    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:55.806271    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:55.839937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:55.839937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.400590    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:58.422580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:58.453081    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.453081    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:58.457176    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:58.482739    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.482739    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:58.486288    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:58.516198    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.516198    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:58.520374    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:58.550134    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.550134    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:58.553679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:58.585815    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.585815    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:58.589532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:58.620180    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.620180    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:58.626021    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:58.656946    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.656946    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:58.659942    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:58.687618    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.687618    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:58.687618    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:58.687618    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:58.777493    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:58.777493    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:58.777493    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:58.805676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:58.805676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.860391    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:58.860391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:58.925444    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:58.925444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:59.807579    4424 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:21:59.810667    4424 cli_runner.go:164] Run: docker exec -t kubenet-030800 dig +short host.docker.internal
	I1216 06:21:59.962844    4424 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:21:59.967733    4424 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:21:59.974503    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:21:59.995371    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:00.053937    4424 kubeadm.go:884] updating cluster {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:22:00.053937    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:22:00.057874    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.094105    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.094105    4424 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:22:00.097332    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.129189    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.129225    4424 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:22:00.129280    4424 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:22:00.129486    4424 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:22:00.132350    4424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:22:00.208072    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:00.208072    4424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:22:00.208072    4424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-030800 NodeName:kubenet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:22:00.208072    4424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 06:22:00.213204    4424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:22:00.225061    4424 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:22:00.229012    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:22:00.242127    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
	I1216 06:22:00.258591    4424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:22:00.278876    4424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
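
The multi-document YAML generated above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. Config files of this shape are consumed through kubeadm's --config flag; a generic invocation (illustrative path, not a command copied from this log) is:

    # Bootstrap a control plane from a multi-document kubeadm config file.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml
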
	I1216 06:22:00.305788    4424 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:22:00.315868    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:22:00.339710    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:00.483171    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:00.505844    4424 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800 for IP: 192.168.103.2
	I1216 06:22:00.505844    4424 certs.go:195] generating shared ca certs ...
	I1216 06:22:00.505844    4424 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.506501    4424 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:22:00.507023    4424 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:22:00.507484    4424 certs.go:257] generating profile certs ...
	I1216 06:22:00.507484    4424 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key
	I1216 06:22:00.507484    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt with IP's: []
	I1216 06:22:00.552695    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt ...
	I1216 06:22:00.552695    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt: {Name:mk4783bd7e1619c0ea341eaca75005ddd88d5b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.553960    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key ...
	I1216 06:22:00.553960    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key: {Name:mk427571c1896a50b896e76c58a633b5512ad44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.555335    4424 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8
	I1216 06:22:00.555661    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:22:00.581299    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 ...
	I1216 06:22:00.581299    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8: {Name:mk9cb22362f0ba7f5c0b5c6877c5c2e8d72eb278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.582304    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 ...
	I1216 06:22:00.582304    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8: {Name:mk2a3e21d232de7f748cffa074c96be0850cc9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.583303    4424 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt
	I1216 06:22:00.599920    4424 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key
	I1216 06:22:00.600703    4424 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key
	I1216 06:22:00.601353    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt with IP's: []
	I1216 06:22:00.664564    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt ...
	I1216 06:22:00.664564    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt: {Name:mk02eb62f20a18ad60f930ae30a248a87b7cb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.665010    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key ...
	I1216 06:22:00.665010    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key: {Name:mk8a8b2a6c6b1b3e2e2cc574e01303d6680bf793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.680006    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:22:00.680554    4424 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:22:00.680554    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:22:00.681404    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:22:00.683052    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:22:00.710388    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:22:00.737370    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:22:00.766290    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:22:00.790943    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:22:00.815072    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:22:00.839330    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:22:00.863340    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:22:00.921806    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:22:00.945068    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:22:00.972351    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:22:00.998813    4424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:22:01.025404    4424 ssh_runner.go:195] Run: openssl version
	I1216 06:22:01.039534    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.056142    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:22:01.077227    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.085140    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.089133    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	W1216 06:22:03.871305    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 06:22:03.871305    2100 node_ready.go:38] duration metric: took 6m0.0002926s for node "no-preload-686300" to be "Ready" ...
	I1216 06:22:03.874339    2100 out.go:203] 
	W1216 06:22:03.877050    2100 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:22:03.877050    2100 out.go:285] * 
	W1216 06:22:03.879403    2100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:22:03.881203    2100 out.go:203] 
	W1216 06:22:00.861344    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:22:01.860562    7800 pod_ready.go:99] pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8s6v4" not found
	I1216 06:22:01.860562    7800 pod_ready.go:86] duration metric: took 10.0087717s for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:01.860562    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:03.875170    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
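
The pod_ready.go polling visible here (process 7800) waits for each kube-system pod to either report the Ready condition or disappear entirely (a deleted pod, like coredns-66bc5c9577-8s6v4 above, counts as done), retrying on transient errors. A minimal sketch of that per-pod check with client-go — the function and its return shape are illustrative, not minikube's API:

    package ready

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReadyOrGone reports (ready, gone, err) for one pod.
    func podReadyOrGone(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, true, nil // pod replaced or deleted: treat as done
        }
        if err != nil {
            return false, false, err // transient (rate limit, EOF): caller retries
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, false, nil
            }
        }
        return false, false, nil // no Ready condition yet
    }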
	I1216 06:22:01.467725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:01.493737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:01.526377    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.526426    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:01.530527    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:01.563582    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.563582    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:01.567007    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:01.606460    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.606460    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:01.610155    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:01.647965    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.647965    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:01.652309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:01.691011    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.691011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:01.695466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:01.736509    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.736509    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:01.739991    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:01.773642    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.773642    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:01.777617    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:01.811025    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.811141    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:01.811141    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:01.811141    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:01.881533    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:01.881533    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.919632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:01.919632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:02.020491    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:02.020538    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:02.020587    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:02.059933    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:02.060031    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
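
Each diagnostic pass by process 8452 probes for the control-plane containers by name filter and, finding none ("0 containers" for every component), falls back to gathering kubelet, dmesg, Docker, and container-status logs. The probe itself is just docker ps with a name filter; a sketch of the same call from Go, shelling out the way ssh_runner does (paths and naming are illustrative):

    package logs

    import (
        "os/exec"
        "strings"
    )

    // listContainers mirrors the repeated probe in the log:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    // returning the (possibly empty) container IDs for one component.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // "0 containers: []" in the log corresponds to len(ids) == 0 here.
        return strings.Fields(string(out)), nil
    }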
	I1216 06:22:04.620893    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:04.645761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:04.680596    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.680596    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:04.684611    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:04.712607    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.712607    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:04.716737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:04.744218    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.744218    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:04.748501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:04.787600    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.787668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:04.791349    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:04.825500    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.825547    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:04.829098    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:04.878465    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.878465    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:04.881466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:04.910168    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.910168    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:04.914167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:04.949752    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.949810    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:04.949810    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:04.949873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.143585    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:22:01.161031    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:22:01.179456    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.197251    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:22:01.216028    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.226660    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.230697    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.278644    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:22:01.297647    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:22:01.317326    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.341360    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:22:01.367643    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.377139    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.383754    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.440843    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:22:01.457977    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
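
The test/ln/openssl sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject hash with a .0 suffix (b5213941.0, 51391683.0, 3ec20f2e.0 in this run), which is the lookup convention TLS clients use for trust stores. A sketch of one installation, shelling out to openssl the same way — the helper name is illustrative:

    package cacerts

    import (
        "os/exec"
        "strings"
    )

    // installCA links pemPath into /etc/ssl/certs under its OpenSSL subject
    // hash (e.g. b5213941.0), mirroring the ln -fs commands in the log above.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        // ln -fs <pemPath> /etc/ssl/certs/<hash>.0; -f replaces any stale link.
        return exec.Command("ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
    }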
	I1216 06:22:01.476683    4424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:22:01.483599    4424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:22:01.484303    4424 kubeadm.go:401] StartCluster: {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:22:01.490132    4424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:22:01.529050    4424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:22:01.545461    4424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:22:01.559986    4424 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:22:01.564509    4424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:22:01.575681    4424 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:22:01.575681    4424 kubeadm.go:158] found existing configuration files:
	
	I1216 06:22:01.581349    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:22:01.593595    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:22:01.599386    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:22:01.618969    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:22:01.633516    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:22:01.638266    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:22:01.656598    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.670398    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:22:01.674972    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.695466    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:22:01.709055    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:22:01.713665    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
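
The kubeadm.go:164 pattern above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not mention it, so the upcoming kubeadm init can regenerate all four from scratch; on a first start, as here, every grep exits 2 and every file is rm'd. A sketch of that check-then-remove loop, assuming a Runner abstraction over the ssh_runner.Run calls (the interface is illustrative):

    package kubeadm

    import "fmt"

    // Runner abstracts the "ssh_runner.go:195] Run:" calls seen in the log.
    type Runner interface {
        Run(cmd string) error
    }

    // cleanupStaleConfigs removes kubeconfigs that don't point at endpoint
    // (e.g. https://control-plane.minikube.internal:8443).
    func cleanupStaleConfigs(r Runner, endpoint string) error {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if err := r.Run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
                // endpoint missing or file absent: remove so kubeadm regenerates it
                if err := r.Run("sudo rm -f " + f); err != nil {
                    return err
                }
            }
        }
        return nil
    }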
	I1216 06:22:01.733357    4424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:22:01.884136    4424 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:22:01.891445    4424 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:22:01.994223    4424 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1216 06:22:06.379758    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:08.874715    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:04.987656    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:04.987703    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:05.093013    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:05.093013    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:05.093013    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:05.148503    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:05.148503    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:05.222357    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:05.222357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:07.791130    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:07.816699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:07.846890    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.846890    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:07.850551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:07.885179    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.885179    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:07.889622    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:07.920925    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.920925    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:07.925517    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:07.955043    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.955043    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:07.959825    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:07.988928    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.988928    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:07.993735    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:08.025335    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.025335    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:08.031801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:08.063231    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.063231    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:08.068525    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:08.106217    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.106217    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:08.106217    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:08.106217    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:08.173411    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:08.173411    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:08.241764    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:08.241764    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:08.282741    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:08.282741    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:08.376141    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:08.376181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:08.376246    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:10.875960    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:13.371029    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:13.873624    7800 pod_ready.go:94] pod "coredns-66bc5c9577-tcbrk" is "Ready"
	I1216 06:22:13.873624    7800 pod_ready.go:86] duration metric: took 12.0128951s for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.879094    7800 pod_ready.go:83] waiting for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.889705    7800 pod_ready.go:94] pod "etcd-bridge-030800" is "Ready"
	I1216 06:22:13.889705    7800 pod_ready.go:86] duration metric: took 10.6111ms for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.893578    7800 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.912416    7800 pod_ready.go:94] pod "kube-apiserver-bridge-030800" is "Ready"
	I1216 06:22:13.912416    7800 pod_ready.go:86] duration metric: took 18.8376ms for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.917120    7800 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.068093    7800 pod_ready.go:94] pod "kube-controller-manager-bridge-030800" is "Ready"
	I1216 06:22:14.068093    7800 pod_ready.go:86] duration metric: took 150.9707ms for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.266154    7800 pod_ready.go:83] waiting for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.666596    7800 pod_ready.go:94] pod "kube-proxy-pbfkb" is "Ready"
	I1216 06:22:14.666596    7800 pod_ready.go:86] duration metric: took 400.436ms for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:10.906574    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:10.929977    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:10.963006    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.963006    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:10.966334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:10.995517    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.995517    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:10.998887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:11.027737    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.027771    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:11.034529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:11.070221    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.070221    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:11.075447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:11.105575    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.105575    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:11.108569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:11.143549    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.143549    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:11.146562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:11.178034    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.178034    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:11.181411    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:11.211522    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.211522    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:11.211522    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:11.211522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:11.244289    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:11.244289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:11.295870    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:11.295870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:11.359418    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:11.360418    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:11.394416    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:11.394416    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:11.489247    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:13.994214    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:14.016691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:14.049641    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.049641    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:14.053607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:14.088893    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.088893    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:14.092847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:14.131857    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.131857    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:14.135845    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:14.168503    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.168503    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:14.172477    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:14.200948    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.200948    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:14.204642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:14.234975    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.234975    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:14.238802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:14.274052    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.274107    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:14.277642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:14.306199    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.306199    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:14.306199    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:14.306199    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:14.374972    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:14.374972    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:14.411356    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:14.411356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:14.498252    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:14.498283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:14.498283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:14.528112    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:14.528112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:14.872200    7800 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:94] pod "kube-scheduler-bridge-030800" is "Ready"
	I1216 06:22:15.267078    7800 pod_ready.go:86] duration metric: took 394.8723ms for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:40] duration metric: took 23.4251556s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:15.362849    7800 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:15.367720    7800 out.go:179] * Done! kubectl is now configured to use "bridge-030800" cluster and "default" namespace by default
	I1216 06:22:17.092050    4424 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:22:17.093065    4424 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:22:17.093065    4424 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:22:17.096059    4424 out.go:252]   - Generating certificates and keys ...
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:22:17.099055    4424 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:22:17.099055    4424 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:22:17.102055    4424 out.go:252]   - Booting up control plane ...
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:22:17.104058    4424 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.507351804s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.957344338s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.90080548s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002224001s
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:22:17.106067    4424 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:22:17.107057    4424 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:22:17.107057    4424 kubeadm.go:319] [bootstrap-token] Using token: rs8etp.b2dh1vgtia9jcvb4
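
The [control-plane-check] phase above polls each component's local health endpoint (kube-controller-manager on :10257, kube-scheduler on :10259, kube-apiserver on :8443) until it answers 200 or the 4m0s budget expires; the per-component "healthy after" durations are how long each poll took. A minimal sketch of such a poll — the helper and its interval are illustrative, not kubeadm's code:

    package health

    import (
        "crypto/tls"
        "errors"
        "net/http"
        "time"
    )

    // waitHealthy polls url (e.g. https://127.0.0.1:10257/healthz) until it
    // returns 200 or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // control-plane components serve self-signed certs on localhost,
            // so verification is skipped in this sketch
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return errors.New("timed out waiting for " + url)
    }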
	I1216 06:22:17.081041    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:17.103056    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:17.137059    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.137059    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:17.141064    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:17.172640    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.172640    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:17.176638    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:17.210910    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.210910    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:17.215347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:17.248986    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.248986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:17.252989    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:17.287415    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.287415    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:17.293572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:17.324098    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.324098    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:17.330062    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:17.366512    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.366512    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:17.370101    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:17.402400    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.402400    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:17.402400    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:17.402400    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:17.455027    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:17.455027    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:17.513029    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:17.513029    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:17.548022    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:17.548022    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:17.645629    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
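Every "describe nodes" attempt in this diagnostic sweep fails the same way: kubectl inside the node dials localhost:8443 and gets connection refused, which matches the container scans above finding no kube-apiserver container at all. A minimal manual probe, assuming a shell inside the affected node (that profile's name is not shown in this excerpt), would be:

    # assumption: run from a shell inside the minikube node for the affected profile
    curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"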
	I1216 06:22:17.645629    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:17.645629    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:17.110053    4424 out.go:252]   - Configuring RBAC rules ...
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:22:17.111060    4424 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.111060    4424 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:22:17.113053    4424 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:22:17.113053    4424 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:22:17.113053    4424 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--control-plane 
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
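The kubeadm init transcript above ends with the real join token and CA cert hash for this run. As a sketch of the follow-up verification, using the same on-node kubectl binary and kubeconfig path that appear in the surrounding log lines:

    # sketch: confirm the control plane answers before joining any workers
    sudo /var/lib/minikube/binaries/v1.34.2/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig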
	I1216 06:22:17.114052    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:17.114052    4424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-030800 minikube.k8s.io/updated_at=2025_12_16T06_22_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kubenet-030800 minikube.k8s.io/primary=true
	I1216 06:22:17.134054    4424 ops.go:34] apiserver oom_adj: -16
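The oom_adj read above returning -16 means the kernel's OOM killer treats kube-apiserver as a low-priority victim under memory pressure. The same check can be repeated by hand inside the node; oom_adj is the legacy knob, oom_score_adj its current equivalent:

    # sketch: inspect both the legacy and the current OOM-priority files
    cat /proc/$(pgrep kube-apiserver)/oom_adj /proc/$(pgrep kube-apiserver)/oom_score_adj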
	I1216 06:22:17.253989    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.753536    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.254825    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.755186    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.255440    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.754492    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.256463    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.753254    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.253896    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.753097    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.858877    4424 kubeadm.go:1114] duration metric: took 4.7437541s to wait for elevateKubeSystemPrivileges
	I1216 06:22:21.858877    4424 kubeadm.go:403] duration metric: took 20.3742909s to StartCluster
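The half-second cadence of "kubectl get sa default" above is minikube polling for the default ServiceAccount to exist before it binds cluster-admin to kube-system (the elevateKubeSystemPrivileges step). Done by hand, the same wait is just a retry loop; the interval below is chosen to match the log:

    # sketch of the equivalent manual wait, using the on-node kubectl from the log
    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done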
	I1216 06:22:21.858877    4424 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.858877    4424 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:22:21.861003    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.861972    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:22:21.861972    4424 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:22:21.861972    4424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
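Of the long toEnable map above, only default-storageclass and storage-provisioner are set true; everything else stays off. A quick after-the-fact check, assuming the same binary and profile as this run:

    out/minikube-windows-amd64.exe addons list -p kubenet-030800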
	I1216 06:22:21.861972    4424 addons.go:70] Setting storage-provisioner=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:239] Setting addon storage-provisioner=true in "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:70] Setting default-storageclass=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:22:21.861972    4424 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-030800"
	I1216 06:22:21.861972    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.864167    4424 out.go:179] * Verifying Kubernetes components...
	I1216 06:22:21.875224    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:21.939068    4424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:22:21.939740    4424 addons.go:239] Setting addon default-storageclass=true in "kubenet-030800"
	I1216 06:22:21.939740    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.942493    4424 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:21.942493    4424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:22:21.947611    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:21.951961    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:22.001257    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.003241    4424 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.003241    4424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:22:22.006248    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:22.070295    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.425928    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:22:22.444230    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:22.451290    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.540661    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:24.151685    4424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7257338s)
	I1216 06:22:24.151837    4424 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
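The sed pipeline completed above splices a host record into CoreDNS's Corefile before replacing the ConfigMap. Reconstructed from the sed expression itself, the inserted stanza is:

    hosts {
       192.168.65.254 host.minikube.internal
       fallthrough
    }

along with a log directive inserted ahead of the errors plugin.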
	I1216 06:22:24.529871    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.0785053s)
	I1216 06:22:24.529983    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.0856125s)
	I1216 06:22:24.530029    4424 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9893406s)
	I1216 06:22:24.535621    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:24.547997    4424 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:22:20.178315    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:20.202308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:20.231344    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.231344    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:20.236317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:20.279459    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.279459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:20.283465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:20.322463    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.322463    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:20.327465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:20.366466    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.366466    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:20.371478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:20.409468    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.409468    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:20.413471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:20.447432    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.447432    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:20.451099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:20.486103    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.486103    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:20.490094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:20.530098    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.530098    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:20.530098    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:20.530098    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:20.557089    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:20.557089    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:20.606234    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:20.607239    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:20.667498    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:20.667498    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:20.703674    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:20.703674    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:20.796605    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:23.300916    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:23.324266    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:23.355598    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.355598    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:23.359141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:23.390554    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.390644    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:23.394340    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:23.423019    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.423019    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:23.426772    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:23.456953    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.457021    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:23.460762    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:23.491477    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.491477    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:23.495183    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:23.527107    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.527107    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:23.531577    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:23.559306    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.559306    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:23.563381    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:23.592615    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.592615    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:23.592615    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:23.592615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:23.630103    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:23.630103    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:23.719384    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:23.719514    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:23.719546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:23.746097    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:23.746097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:23.807727    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:23.807727    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:24.550004    4424 addons.go:530] duration metric: took 2.6879945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:22:24.591996    4424 node_ready.go:35] waiting up to 15m0s for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 node_ready.go:49] node "kubenet-030800" is "Ready"
	I1216 06:22:24.646202    4424 node_ready.go:38] duration metric: took 54.2051ms for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:22:24.652200    4424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:24.721472    4424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-030800" context rescaled to 1 replicas
	I1216 06:22:24.735392    4424 api_server.go:72] duration metric: took 2.87338s to wait for apiserver process to appear ...
	I1216 06:22:24.735392    4424 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:22:24.735392    4424 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56385/healthz ...
	I1216 06:22:24.821241    4424 api_server.go:279] https://127.0.0.1:56385/healthz returned 200:
	ok
	I1216 06:22:24.825583    4424 api_server.go:141] control plane version: v1.34.2
	I1216 06:22:24.825583    4424 api_server.go:131] duration metric: took 90.1899ms to wait for apiserver health ...
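The healthz probe above targets 127.0.0.1:56385 rather than 8443 because the Docker driver publishes the container's 8443/tcp on a random host port; the docker container inspect call a few lines earlier is what resolves that mapping. The same probe by hand:

    # sketch: -k because the apiserver cert is not in the host trust store
    curl -sk https://127.0.0.1:56385/healthz    # prints: ok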
	I1216 06:22:24.825583    4424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:22:24.832936    4424 system_pods.go:59] 8 kube-system pods found
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.833022    4424 system_pods.go:61] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.833131    4424 system_pods.go:74] duration metric: took 7.4392ms to wait for pod list to return data ...
	I1216 06:22:24.833131    4424 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:22:24.838156    4424 default_sa.go:45] found service account: "default"
	I1216 06:22:24.838156    4424 default_sa.go:55] duration metric: took 5.0253ms for default service account to be created ...
	I1216 06:22:24.838156    4424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:22:24.844228    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.844228    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.844228    4424 retry.go:31] will retry after 236.325715ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.105694    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.105749    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.105770    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.105848    4424 retry.go:31] will retry after 372.640753ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.532382    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.532482    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.532587    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.532611    4424 retry.go:31] will retry after 313.138834ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.853141    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.853661    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.853715    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.853777    4424 retry.go:31] will retry after 472.942865ms: missing components: kube-dns, kube-proxy
	I1216 06:22:26.382913    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:26.404112    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:26.436722    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.436722    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:26.440749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:26.470877    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.470877    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:26.474941    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:26.503887    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.503950    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:26.508216    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:26.538317    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.538317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:26.542754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:26.571126    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.571189    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:26.574883    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:26.604762    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.604762    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:26.608705    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:26.637404    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.637444    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:26.641214    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:26.669720    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.669720    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:26.669720    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:26.669720    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:26.707289    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:26.707289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:26.791357    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:26.791357    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:26.791357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:26.817227    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:26.817227    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.865832    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:26.865832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.436231    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:29.459817    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:29.493134    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.493186    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:29.497118    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:29.526722    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.526722    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:29.531481    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:29.561672    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.561718    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:29.566882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:29.595896    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.595947    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:29.599655    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:29.628575    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.628661    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:29.632644    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:29.660164    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.660164    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:29.663829    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:29.694413    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.694413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:29.698152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:29.725286    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.725286    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:29.725355    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:29.725355    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.787721    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:29.787721    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:29.828376    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:29.828376    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:29.916249    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:29.916249    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:29.916249    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:29.942276    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:29.942276    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.336069    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Running
	I1216 06:22:26.336069    4424 system_pods.go:126] duration metric: took 1.4978916s to wait for k8s-apps to be running ...
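The poll loop that just finished re-lists the kube-system pods every few hundred milliseconds, with jittered backoff per the retry.go lines, until none remain Pending. A one-shot equivalent of that check, where the field selector is an assumption rather than minikube's actual query:

    sudo /var/lib/minikube/binaries/v1.34.2/kubectl get pods -n kube-system \
        --kubeconfig=/var/lib/minikube/kubeconfig --field-selector=status.phase!=Running
    # empty output once every kube-system pod is Running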
	I1216 06:22:26.336069    4424 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:22:26.342244    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:22:26.368294    4424 system_svc.go:56] duration metric: took 32.1861ms WaitForService to wait for kubelet
	I1216 06:22:26.368345    4424 kubeadm.go:587] duration metric: took 4.5062595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:22:26.368345    4424 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:22:26.376647    4424 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:22:26.376691    4424 node_conditions.go:123] node cpu capacity is 16
	I1216 06:22:26.376745    4424 node_conditions.go:105] duration metric: took 8.3456ms to run NodePressure ...
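
The node_conditions.go check reads Node.Status.Capacity, here 1055762868Ki of ephemeral storage (roughly 1 TiB) and 16 CPUs, purely to verify the node reports sane, non-zero resources. The equivalent read via client-go, under the same default-kubeconfig assumption as the previous sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            c := n.Status.Capacity
            // Cpu() and StorageEphemeral() are ResourceList helpers that
            // return the corresponding resource.Quantity.
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
                n.Name, c.Cpu().String(), c.StorageEphemeral().String())
        }
    }
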
	I1216 06:22:26.376745    4424 start.go:242] waiting for startup goroutines ...
	I1216 06:22:26.376745    4424 start.go:247] waiting for cluster config update ...
	I1216 06:22:26.376795    4424 start.go:256] writing updated cluster config ...
	I1216 06:22:26.382913    4424 ssh_runner.go:195] Run: rm -f paused
	I1216 06:22:26.391122    4424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:26.399112    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:28.410987    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:30.912607    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	I1216 06:22:32.497361    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:32.517362    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:32.549841    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.549912    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:32.553592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:32.582070    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.582070    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:32.585068    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:32.612095    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.612095    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:32.615889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:32.644953    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.644953    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:32.649025    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:32.676348    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.676429    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:32.680134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:32.708040    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.708040    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:32.712034    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:32.745789    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.745789    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:32.752533    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:32.781449    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.781504    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
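
This enumeration block, repeated verbatim in every retry cycle, looks for control-plane containers by name. cri-dockerd keeps the old dockershim naming scheme, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so one name filter per component is enough to ask "does this component exist as a container at all?" Here every filter returns 0 containers, meaning kubelet never created any control-plane pods. A condensed sketch of the same enumeration (my own loop, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "k8s_kube-apiserver", "k8s_etcd", "k8s_kube-scheduler",
            "k8s_kube-controller-manager", "k8s_kube-proxy", "k8s_coredns",
        }
        for _, name := range components {
            // Same filter the log shows: match by container-name prefix,
            // print only the container IDs.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%-30s error: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%-30s %d container(s): %v\n", name, len(ids), ids)
        }
    }
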
	I1216 06:22:32.781504    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:32.781504    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:32.843135    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:32.843135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:32.881564    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:32.881564    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:32.982597    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:32.982597    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:32.982597    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:33.013212    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:33.013212    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
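
From here the 8452 run settles into a fixed cycle of roughly three seconds, visible in the timestamps (06:22:32, :35, :38, ...): `pgrep -xnf kube-apiserver.*minikube.*` (-f matches against the full command line, -x requires an exact match, -n takes the newest process) checks for a live apiserver; finding none, it re-enumerates the k8s_* containers and re-gathers the same four sources: the kubelet journal, the kernel ring buffer (dmesg restricted to warn and above, with color and paging disabled), `kubectl describe nodes`, and the docker/cri-docker journals plus container status. Only the describe-nodes step can fail loudly, which is why its stderr dominates the remainder of this report.
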
	W1216 06:22:33.410898    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:35.912070    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	I1216 06:22:35.578218    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:35.601163    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:35.629786    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.629786    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:35.634440    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:35.663168    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.663168    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:35.667699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:35.699050    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.699050    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:35.703224    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:35.736149    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.736149    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:35.741542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:35.772450    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.772450    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:35.776692    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:35.804150    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.804150    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:35.808799    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:35.837871    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.837871    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:35.841100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:35.870769    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.870769    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:35.870769    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:35.870769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:35.934803    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:35.934803    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:35.973201    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:35.973201    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:36.070057    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:36.070057    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:36.070057    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:36.098690    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:36.098690    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:38.663786    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:38.688639    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:38.718646    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.718646    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:38.721640    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:38.751651    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.751651    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:38.754647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:38.784327    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.784327    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:38.788327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:38.815337    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.815337    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:38.818328    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:38.846331    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.846331    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:38.849339    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:38.880297    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.880297    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:38.884227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:38.917702    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.917702    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:38.920940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:38.964973    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.964973    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:38.964973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:38.964973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:38.999971    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:38.999971    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:39.102927    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:39.102927    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:39.102927    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:39.141934    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:39.141934    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:39.210081    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:39.210081    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:36.404625    4424 pod_ready.go:99] pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8qrgg" not found
	I1216 06:22:36.404625    4424 pod_ready.go:86] duration metric: took 10.0053735s for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:36.404625    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:38.415310    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:40.417680    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:41.775031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:41.798710    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:41.831778    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.831778    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:41.835461    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:41.866411    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.866411    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:41.871544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:41.902486    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.902486    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:41.905907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:41.932887    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.932887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:41.935886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:41.965890    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.965890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:41.968887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:42.000893    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.000893    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:42.004876    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:42.043522    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.043591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:42.049149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:42.081678    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.081678    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:42.081678    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:42.081678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:42.140208    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:42.140208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:42.198197    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:42.198197    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:42.241586    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:42.241586    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:42.350617    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:42.350617    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:42.350617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:44.884303    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:44.902304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:44.933421    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.933421    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:44.938149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:44.974292    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.974334    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:44.977512    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1216 06:22:42.418518    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:44.914304    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:45.010620    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.010620    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:45.013618    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:45.047628    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.047628    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:45.050627    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:45.089756    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.089850    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:45.096356    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:45.137323    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.137323    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:45.141322    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:45.169330    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.170335    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:45.173321    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:45.202336    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.202336    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:45.202336    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:45.202336    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:45.227331    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:45.227331    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:45.275577    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:45.275630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:45.335206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:45.335206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:45.372222    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:45.372222    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:45.471935    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:47.976320    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:48.004505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:48.037430    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.037430    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:48.040437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:48.076428    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.076477    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:48.081194    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:48.118536    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.118536    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:48.124810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:48.153702    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.153702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:48.159558    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:48.187736    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.187736    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:48.192607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:48.225619    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.225619    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:48.229580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:48.260085    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.260085    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:48.263087    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:48.294313    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.294376    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:48.294376    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:48.294425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:48.345094    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:48.345094    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:48.423576    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:48.423576    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:48.459577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:48.459577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:48.548441    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:48.548441    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:48.548441    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:47.414818    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:49.417236    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:51.080561    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:51.104134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:51.132144    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.132144    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:51.136151    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:51.163962    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.163962    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:51.169361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:51.198404    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.198404    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:51.201253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:51.229899    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.229899    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:51.232895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:51.261881    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.261881    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:51.264887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:51.295306    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.295306    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:51.298763    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:51.331779    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.331850    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:51.337211    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:51.367502    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.367502    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:51.367502    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:51.367502    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:51.424226    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:51.424226    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:51.482475    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:51.482475    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:51.527426    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:51.527426    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:51.618444    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:51.618444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:51.618444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.148108    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:54.167190    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:54.198456    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.198456    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:54.202605    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:54.236901    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.236901    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:54.240906    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:54.272541    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.272541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:54.277008    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:54.312764    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.312764    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:54.317359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:54.347564    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.347564    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:54.350557    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:54.377557    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.377557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:54.381564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:54.411585    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.411585    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:54.415565    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:54.447567    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.447567    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:54.447567    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:54.447567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:54.483559    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:54.483559    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:54.589583    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:54.589583    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:54.589583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.617283    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:54.617349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:54.673906    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:54.673990    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 06:22:51.420194    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:53.916809    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:55.919718    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:58.419688    4424 pod_ready.go:94] pod "coredns-66bc5c9577-w7zmc" is "Ready"
	I1216 06:22:58.419688    4424 pod_ready.go:86] duration metric: took 22.0147573s for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.424677    4424 pod_ready.go:83] waiting for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.432677    4424 pod_ready.go:94] pod "etcd-kubenet-030800" is "Ready"
	I1216 06:22:58.432677    4424 pod_ready.go:86] duration metric: took 7.9992ms for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.435689    4424 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.459477    4424 pod_ready.go:94] pod "kube-apiserver-kubenet-030800" is "Ready"
	I1216 06:22:58.459477    4424 pod_ready.go:86] duration metric: took 22.793ms for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.463834    4424 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.611617    4424 pod_ready.go:94] pod "kube-controller-manager-kubenet-030800" is "Ready"
	I1216 06:22:58.611617    4424 pod_ready.go:86] duration metric: took 147.7381ms for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.811398    4424 pod_ready.go:83] waiting for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.211755    4424 pod_ready.go:94] pod "kube-proxy-5b9l9" is "Ready"
	I1216 06:22:59.211755    4424 pod_ready.go:86] duration metric: took 400.3513ms for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.412761    4424 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811735    4424 pod_ready.go:94] pod "kube-scheduler-kubenet-030800" is "Ready"
	I1216 06:22:59.811813    4424 pod_ready.go:86] duration metric: took 399.0464ms for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811850    4424 pod_ready.go:40] duration metric: took 33.4202632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:59.926671    4424 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:59.930035    4424 out.go:179] * Done! kubectl is now configured to use "kubenet-030800" cluster and "default" namespace by default
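
Meanwhile the 4424 run completes cleanly: each kube-system pod turns Ready in sequence, the deleted coredns-66bc5c9577-8qrgg pod counts as success because the wait condition is "Ready or be gone", and kubectl 1.34.3 against a 1.34.2 server is the same minor version, well inside kubectl's supported one-minor-version skew. Below is a sketch of that "Ready or gone" wait using client-go and apimachinery's polling helper; it is my own approximation of what pod_ready.go does, not minikube's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitReadyOrGone polls until the pod is Ready or no longer exists.
    // A NotFound result ends the wait successfully, matching the
    // coredns-66bc5c9577-8qrgg case above, where the pod vanished mid-wait.
    func waitReadyOrGone(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return true, nil // gone counts as done
                }
                if err != nil {
                    return false, nil // transient error; keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        if err := waitReadyOrGone(client, "kube-system", "coredns-66bc5c9577-w7zmc", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready or gone")
    }

Treating NotFound as success is what keeps the wait from dead-ending when a Deployment replaces its pods mid-rollout, which is exactly the 8qrgg case logged above.
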
	I1216 06:22:57.250472    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:57.271468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:57.303800    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.303800    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:57.306801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:57.338803    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.338803    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:57.341800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:57.369018    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.369018    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:57.372806    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:57.403510    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.403510    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:57.406808    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:57.440995    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.440995    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:57.444225    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:57.475612    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.475612    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:57.479607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:57.509842    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.509842    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:57.513186    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:57.545981    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.545981    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
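	The block above is one pass of the health probe that repeats through the rest of this log: pgrep -xnf matches the newest (-n) process whose full command line (-f) wholly matches (-x) the kube-apiserver.*minikube.* pattern, and each docker ps call filters on the k8s_<component> name prefix that cri-dockerd gives Kubernetes-managed containers. A rough replay of one pass, run inside the node (a sketch under those assumptions):

	    # Process check: newest process whose whole cmdline matches the pattern
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	    # Container checks: cri-dockerd names containers k8s_<container>_<pod>_<ns>_<uid>_<attempt>
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Names}} {{.Status}}'
	    done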
	I1216 06:22:57.545981    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:57.545981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:57.636635    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
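	Each "connection refused" above means nothing is accepting on port 8443 inside the node, i.e. no kube-apiserver is up, which is consistent with the "0 containers" probes. One way to confirm from the host, with <profile> as a stand-in for the failing profile (not named in this excerpt):

	    # List TCP listeners inside the node; expect no match while the apiserver is down
	    minikube -p <profile> ssh "sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'"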
	I1216 06:22:57.636635    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:57.636635    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:57.662639    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:57.662639    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:57.720464    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:57.720464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:57.782460    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:57.782460    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.324364    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:00.344368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:00.375358    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.375358    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:00.378355    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:00.410368    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.410368    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:00.414359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:00.442364    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.442364    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:00.446359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:00.476371    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.476371    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:00.479359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:00.508323    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.508323    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:00.512431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:00.550611    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.550611    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:00.553606    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:00.586336    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.586336    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:00.590552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:00.624129    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.624129    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:00.624129    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:00.624129    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:00.685547    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:00.685547    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.737417    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:00.737417    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:00.858025    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:00.858025    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:00.858025    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:00.886607    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:00.886607    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
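	The container-status command above is a fallback chain: the "which crictl || echo crictl" substitution yields either crictl's full path or the bare name, so when crictl is absent (or its ps -a call fails) the trailing || takes the Docker branch. Unrolled, minus the also-fall-back-on-crictl-error nuance:

	    # Prefer the CRI view when crictl is installed, otherwise ask Docker directly
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi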
	I1216 06:23:03.463847    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:03.826614    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:03.881622    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.881622    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:03.887610    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:03.936557    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.937539    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:03.941562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:03.979542    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.979542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:03.983550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:04.020535    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.020535    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:04.025547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:04.064541    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.064541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:04.068548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:04.101538    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.101538    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:04.104544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:04.141752    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.141752    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:04.146757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:04.182755    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.182755    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:04.182755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:04.182755    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:04.305758    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:04.305758    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:04.356425    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:04.356425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:04.487429    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
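	The describe-nodes probe runs the pinned kubectl binary against the in-node kubeconfig; both paths are taken verbatim from the log. To see which endpoint that kubeconfig targets (and hence why kubectl dials localhost:8443), one could run inside the node:

	    # Print the API server URL recorded in the kubeconfig the probe uses
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -o jsonpath='{.clusters[0].cluster.server}'   # per the errors above: https://localhost:8443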
	I1216 06:23:04.487429    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:04.487429    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:04.526318    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:04.526362    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.087022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:07.110346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:07.137790    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.137790    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:07.141786    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:07.174601    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.174601    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:07.179419    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:07.211656    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.211656    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:07.216897    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:07.250459    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.250459    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:07.254048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:07.282207    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.282207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:07.285851    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:07.313925    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.313925    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:07.317529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:07.348851    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.348851    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:07.353083    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:07.381401    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.381401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:07.381401    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:07.381401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:07.408641    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:07.408641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.450935    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:07.450935    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:07.512733    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:07.512733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:07.552522    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:07.552522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:07.649624    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:10.155054    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:10.178201    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:10.207068    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.207068    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:10.210473    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:10.239652    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.239652    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:10.242766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:10.274887    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.274887    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:10.278519    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:10.308294    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.308351    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:10.312209    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:10.342572    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.342572    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:10.346437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:10.375569    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.375630    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:10.378861    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:10.405446    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.405446    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:10.410730    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:10.441244    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.441244    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:10.441244    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:10.441244    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:10.502753    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:10.502753    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
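	The dmesg gather keeps only kernel messages at warning severity or worse, with color and the pager suppressed, trimmed to the last 400 lines. Spelled out with long options (util-linux dmesg; -P is --nopager only in recent releases, so treat this as an approximation):

	    sudo dmesg --human --color=never --nopager \
	      --level=warn,err,crit,alert,emerg | tail -n 400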
	I1216 06:23:10.540437    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:10.540437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:10.626853    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:10.626853    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:10.626853    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:10.654987    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:10.655058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.213336    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:13.237358    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:13.266636    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.266721    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:13.270023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:13.297369    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.297434    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:13.300782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:13.336039    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.336039    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:13.341919    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:13.370523    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.370523    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:13.374455    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:13.404606    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.404606    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:13.408542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:13.437373    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.437431    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:13.441106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:13.470738    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.470738    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:13.474495    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:13.502203    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.502262    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:13.502262    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:13.502293    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.552578    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:13.552578    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:13.617499    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:13.617499    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:13.660047    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:13.660047    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:13.747316    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:13.747316    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:13.747316    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
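	journalctl accepts repeated -u flags, so the Docker gather above interleaves the docker and cri-docker units by time, keeping the newest 400 entries. The same query with timestamps made explicit:

	    sudo journalctl -u docker -u cri-docker -n 400 --no-pager -o short-iso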
	I1216 06:23:16.284216    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:16.307907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:16.344535    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.344535    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:16.347847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:16.379001    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.379021    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:16.382292    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:16.413093    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.413116    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:16.418012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:16.456763    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.456826    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:16.460621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:16.491671    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.491693    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:16.495352    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:16.527862    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.527862    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:16.534704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:16.564194    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.564243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:16.570369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:16.601444    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.601444    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:16.601444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:16.601444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.631785    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:16.631785    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:16.675190    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:16.675190    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:16.737700    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:16.737700    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:16.775092    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:16.775092    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:16.865026    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:19.370669    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:19.393524    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:19.423405    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.423513    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:19.427307    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:19.459137    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.459238    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:19.462635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:19.493542    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.493542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:19.497334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:19.526496    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.526496    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:19.529949    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:19.559120    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.559120    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:19.562460    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:19.591305    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.591305    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:19.595794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:19.625200    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.626193    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:19.629187    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:19.657201    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.657201    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:19.657270    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:19.657270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:19.722496    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:19.722496    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:19.761161    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:19.761161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:19.852755    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:19.853756    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:19.853756    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:19.880330    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:19.881280    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.458668    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:22.483505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:22.514647    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.514647    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:22.518193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:22.551494    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.551494    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:22.555268    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:22.586119    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.586119    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:22.590107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:22.621733    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.621733    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:22.624739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:22.651728    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.651728    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:22.655725    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:22.687826    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.687826    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:22.692217    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:22.727413    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.727413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:22.731318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:22.769477    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.769477    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:22.770462    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:22.770462    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:22.795455    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:22.795455    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.851473    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:22.851473    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:22.911454    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:22.912459    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:22.948112    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:22.948112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:23.039238    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:25.544174    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:25.571784    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:25.610368    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.610422    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:25.615377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:25.651080    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.651129    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:25.655234    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:25.695942    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.695942    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:25.700548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:25.727743    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.727743    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:25.730739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:25.765620    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.765650    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:25.769261    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:25.805072    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.805127    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:25.810318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:25.840307    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.840307    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:25.844490    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:25.888279    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.888279    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:25.888279    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:25.888279    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:25.964206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:25.964206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:26.003275    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:26.003275    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:26.111485    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
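
	Every cycle above fails at the same point: kubectl on the node cannot reach the API server because nothing is listening on localhost:8443, which is consistent with the "0 containers" results for every control-plane component. A quick manual reproduction of the check (a sketch: it assumes shell access to the node via minikube ssh, and uses the standard apiserver /livez health endpoint, which this log does not itself exercise; add -p <profile> for a named profile):

	    # Is any apiserver process running for this profile at all?
	    minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Probe the local apiserver port; while no apiserver is up this fails
	    # with "connection refused", matching the kubectl errors above
	    minikube ssh -- curl -sk https://localhost:8443/livez
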
	I1216 06:23:26.111485    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:26.111485    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:26.146819    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:26.146819    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:28.694382    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:28.716947    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:28.753062    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.753062    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:28.756810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:28.789692    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.789692    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:28.794681    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:28.823690    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.823690    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:28.827683    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:28.858686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.858686    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:28.861688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:28.891686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.891686    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:28.894684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:28.923683    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.923683    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:28.926684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:28.958314    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.958314    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:28.962325    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:28.991317    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.991317    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:28.991317    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:28.991317    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:29.039348    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:29.039348    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:29.103117    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:29.103117    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:29.148003    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:29.148003    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:29.240448    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
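
	The docker ps calls in each cycle look for control-plane containers by the k8s_ name prefix that cri-dockerd applies to pod containers; "0 containers" for every filter means the containers were never created, not merely stopped. The same lookup can be run by hand on the node (a sketch; quoting is added for an interactive shell):

	    # IDs of any kube-apiserver containers, running or exited
	    docker ps -a --filter 'name=k8s_kube-apiserver' --format '{{.ID}}'

	    # Or count every expected component in one pass
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "$c: $(docker ps -aq --filter "name=k8s_$c" | wc -l) container(s)"
	    done
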
	I1216 06:23:29.240448    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:29.240448    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:31.772923    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:31.796203    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:31.827485    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.827485    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:31.830572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:31.873718    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.873718    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:31.877445    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:31.926391    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.926391    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:31.929391    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:31.964572    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.964572    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:31.968096    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:32.003776    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.003776    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:32.007175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:32.046322    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.046322    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:32.049283    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:32.077299    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.077299    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:32.080289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:32.114717    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.114793    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:32.114793    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:32.114843    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:32.191987    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:32.191987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:32.237143    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:32.237143    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:32.331899    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:32.331899    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:32.331899    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:32.362021    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:32.362021    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
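
	The "container status" command is a deliberate fallback chain: if crictl is installed, `which crictl` prints its path and that binary is used; if not, the substitution leaves the literal word crictl, the first command fails, and the || branch falls through to plain docker ps -a. The same pattern in modern $() form:

	    # Prefer crictl when present, otherwise fall back to the Docker CLI
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
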
	I1216 06:23:34.918825    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:34.945647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:34.976745    8452 logs.go:282] 0 containers: []
	W1216 06:23:34.976745    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:34.980636    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:35.012295    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.012295    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:35.015295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:35.047289    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.047289    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:35.050289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:35.081492    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.081492    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:35.085580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:35.121645    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.121645    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:35.126840    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:35.167976    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.167976    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:35.170966    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:35.201969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.201969    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:35.204969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:35.232969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.233980    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:35.233980    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:35.233980    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:35.292973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:35.292973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:35.327973    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:35.327973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:35.420114    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:35.420114    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:35.420114    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:35.451148    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:35.451148    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
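
	Between log dumps the process polls pgrep roughly every three seconds: -f matches the pattern against the full command line, -x requires the pattern to match that whole command line rather than a substring, and -n keeps only the newest match, so the poll succeeds only once a real apiserver for this profile is running. An equivalent hand-run wait loop (a sketch; the 3s interval mirrors the timestamps above):

	    # Block until an apiserver whose command line matches the profile appears
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 3; done
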
	I1216 06:23:38.010056    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:38.035506    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:38.071853    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.071853    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:38.075564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:38.106543    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.106543    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:38.109547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:38.143669    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.143669    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:38.152737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:38.191923    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.191923    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:38.195575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:38.225935    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.225935    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:38.228939    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:38.268550    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.268550    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:38.271759    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:38.304387    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.304421    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:38.307849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:38.341968    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.341968    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:38.341968    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:38.341968    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:38.404267    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:38.404267    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:38.443104    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:38.443104    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:38.551474    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:38.551474    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:38.551474    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:38.582843    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:38.582869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.141896    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:41.185331    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:41.218961    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.219548    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:41.223789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:41.252376    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.252376    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:41.255368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:41.285378    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.285378    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:41.288369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:41.318383    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.318383    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:41.321372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:41.349373    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.349373    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:41.353377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:41.390105    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.390105    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:41.393103    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:41.425109    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.425109    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:41.428107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:41.462594    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.462594    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:41.462594    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:41.462594    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:41.492096    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:41.492156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.553755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:41.553806    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:41.622329    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:41.622329    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:41.664016    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:41.664016    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:41.759009    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
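
	The kubelet and Docker sections are read straight from journald, capped at the last 400 entries per unit. The same views are available interactively on the node, which is usually the quickest way to see why the static pods never came up:

	    # Last 400 kubelet log lines
	    sudo journalctl -u kubelet -n 400

	    # Docker engine and cri-dockerd, interleaved by time
	    sudo journalctl -u docker -u cri-docker -n 400
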
	I1216 06:23:44.265223    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:44.286309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:44.319583    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.319583    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:44.324575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:44.358046    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.358114    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:44.361895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:44.390541    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.390541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:44.395354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:44.433163    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.433163    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:44.436754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:44.470605    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.470605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:44.475856    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:44.504412    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.504484    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:44.508013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:44.540170    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.540170    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:44.545802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:44.574593    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.575118    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:44.575181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:44.575181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:44.609181    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:44.609231    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:44.663988    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:44.663988    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:44.737678    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:44.737678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:44.777530    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:44.777530    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:44.868751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
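
	The dmesg step keeps only kernel messages at warning severity or worse: -P disables the pager, -H selects human-readable output, -L=never strips colour so the capture stays plain text, and --level restricts the severities before tail caps the volume:

	    # Kernel warnings and errors only, newest 400 lines, plain text
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
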
	I1216 06:23:47.373432    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:47.674375    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:47.705067    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.705067    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:47.709193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:47.739921    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.739921    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:47.743656    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:47.771970    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.771970    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:47.776451    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:47.808633    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.808633    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:47.813124    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:47.856079    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.856079    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:47.859452    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:47.891897    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.891897    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:47.895769    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:47.926050    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.926050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:47.929679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:47.962571    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.962571    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:47.962571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:47.962571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:48.026367    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:48.026367    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:48.063580    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:48.063580    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:48.173751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:48.173792    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:48.173792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:48.199403    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:48.199403    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:50.750699    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:50.774573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:50.804983    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.804983    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:50.808894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:50.838533    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.838533    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:50.842242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:50.873377    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.873377    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:50.877508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:50.907646    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.907646    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:50.912410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:50.943853    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.943853    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:50.950275    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:50.977570    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.977570    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:50.982575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:51.010211    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.010211    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:51.014545    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:51.048584    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.048584    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:51.048584    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:51.048584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:51.112725    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:51.112725    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:51.150854    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:51.150854    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:51.246494    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:51.246535    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:51.246535    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:51.274873    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:51.274873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:53.832981    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:53.857995    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:53.892159    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.892159    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:53.895775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:53.926160    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.926160    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:53.929408    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:53.956482    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.956552    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:53.959711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:53.989788    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.989788    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:53.993230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:54.022506    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.022506    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:54.025409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:54.054974    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.054974    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:54.059372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:54.088015    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.088015    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:54.092123    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:54.121961    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.121961    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:54.121961    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:54.121961    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:54.169232    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:54.169295    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:54.230158    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:54.231156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:54.267713    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:54.267713    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:54.368006    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:54.368006    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:54.368006    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:56.899723    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:56.923149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:56.957635    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.957635    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:56.961499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:56.988363    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.988363    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:56.992371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:57.021993    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.021993    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:57.025544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:57.055718    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.055718    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:57.060969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:57.092456    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.092523    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:57.096418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:57.125588    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.125588    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:57.129665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:57.160663    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.160663    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:57.164518    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:57.196231    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.196281    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
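
Each probe above filters docker ps -a by container name; with cri-dockerd, kubelet-managed containers carry a k8s_ name prefix, so an empty ID list means the component was never even created. A minimal sketch of the same presence scan, collapsed into one loop run inside the node:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	  echo "${c}: ${ids:-none}"
	done
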
	I1216 06:23:57.196281    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:57.196281    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:57.258973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:57.258973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
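
The kubelet and kernel logs are pulled with bounded tails (-n 400, tail -n 400) so each retry stays cheap. The same pulls can be reproduced from the host; this is a sketch, and <profile> is again a placeholder:

	minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400 --no-pager"
	minikube -p <profile> ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
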
	I1216 06:23:57.302939    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:57.302939    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:57.397577    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:57.397577    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:57.397577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:57.434801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:57.434801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
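
The backquoted `which crictl || echo crictl` is a fallback idiom: when crictl is on PATH the substitution yields its full path, otherwise it yields the bare word crictl, that sudo call fails, and the outer || drops through to docker ps -a. A roughly equivalent, more explicit form:

	# explicit version of the crictl-or-docker container-status fallback
	if command -v crictl >/dev/null 2>&1; then
	  sudo crictl ps -a
	else
	  sudo docker ps -a
	fi

The original one-liner also falls back to docker when crictl exists but exits non-zero; the sketch trades that edge case for readability.
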
	I1216 06:23:59.991022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:00.014170    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:00.046529    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.046529    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:00.050903    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:00.080796    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.080796    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:00.084418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:00.114858    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.114858    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:00.121404    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:00.152596    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.152596    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:00.156447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:00.183532    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.183648    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:00.187074    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:00.218971    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.218971    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:00.222929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:00.252086    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.252086    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:00.256309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:00.285884    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.285884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:00.285884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:00.285884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:00.364208    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:00.364208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:00.403464    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:00.403464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:00.495864    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:00.495864    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:00.495864    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:00.521592    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:00.521592    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:03.070724    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:03.093858    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:03.127112    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.127112    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:03.131265    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:03.161262    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.161262    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:03.165073    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:03.195882    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.195933    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:03.200488    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:03.230205    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.230205    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:03.234193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:03.263580    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.263629    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:03.267410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:03.297599    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.297652    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:03.300957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:03.329666    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.329720    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:03.333378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:03.365184    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.365236    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:03.365282    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:03.365282    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:03.428385    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:03.428385    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:03.465984    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:03.465984    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:03.557873    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:03.559101    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:03.559101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:03.586791    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:03.586791    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:06.142562    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:06.170227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:06.202672    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.202672    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:06.206691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:06.237624    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.237624    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:06.241559    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:06.267616    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.267616    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:06.271709    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:06.304567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.304567    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:06.308556    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:06.337567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.337567    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:06.344744    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:06.373520    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.373520    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:06.377184    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:06.411936    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.411936    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:06.415789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:06.447263    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.447263    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:06.447263    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:06.447263    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:06.509097    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:06.509097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:06.546188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:06.546188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:06.639923    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:06.639923    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:06.639923    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:06.666485    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:06.666519    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.221249    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:09.244788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:09.276490    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.276490    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:09.280706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:09.309520    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.309520    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:09.313105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:09.339092    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.339092    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:09.343484    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:09.369046    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.369046    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:09.373188    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:09.403810    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.403810    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:09.407108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:09.437156    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.437156    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:09.441754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:09.469752    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.469810    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:09.473378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:09.503754    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.503754    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:09.503754    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:09.503754    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:09.533645    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:09.533718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.587529    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:09.587529    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:09.647801    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:09.647801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:09.686577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:09.686577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:09.782674    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
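
From here the same gather-everything pass repeats without new information: the timestamps show one full cycle roughly every three seconds, each opening with a pgrep probe for a running kube-apiserver whose command line mentions the profile. A minimal sketch of that readiness poll, with the interval inferred from the timestamps above:

	# -x whole-line match, -n newest process, -f match the full command line
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done
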
	I1216 06:24:12.288199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:12.313967    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:12.344043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.344043    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:12.348347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:12.378683    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.378683    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:12.382106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:12.411599    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.411599    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:12.415131    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:12.445826    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.445873    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:12.450940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:12.481043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.481078    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:12.484800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:12.512969    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.512990    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:12.515915    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:12.548151    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.548228    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:12.551706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:12.584039    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.584039    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:12.584039    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:12.584039    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:12.646680    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:12.646680    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:12.686545    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:12.686545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:12.804767    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:12.804767    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:12.804767    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:12.831866    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:12.831866    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:15.392415    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:15.416435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:15.445044    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.445044    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:15.449260    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:15.476688    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.476688    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:15.481012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:15.508866    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.508928    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:15.512662    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:15.541002    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.541002    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:15.545363    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:15.574947    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.574991    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:15.578407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:15.604751    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.604751    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:15.608699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:15.639261    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.639338    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:15.642317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:15.674404    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.674404    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:15.674404    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:15.674404    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:15.736218    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:15.736218    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:15.774188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:15.774188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:15.862546    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:15.862546    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:15.862546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:15.888115    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:15.888115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.441031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:18.465207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:18.495447    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.495481    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:18.498929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:18.528412    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.528476    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:18.531543    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:18.560175    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.560175    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:18.563996    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:18.592824    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.592894    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:18.596175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:18.623746    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.623746    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:18.627099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:18.652978    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.653013    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:18.656407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:18.683637    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.683686    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:18.687125    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:18.716903    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.716942    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:18.716964    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:18.716981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:18.743123    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:18.743675    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.794891    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:18.794891    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:18.858345    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:18.858345    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:18.894242    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:18.894242    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:18.979844    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:21.485585    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:21.510290    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:21.539823    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.539823    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:21.543159    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:21.575241    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.575241    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:21.579330    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:21.607389    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.607490    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:21.611023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:21.642332    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.642332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:21.645973    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:21.671339    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.671390    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:21.675048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:21.704483    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.704483    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:21.708499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:21.734944    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.735027    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:21.738688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:21.768890    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.768890    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:21.768987    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:21.768987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:21.800297    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:21.800344    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:21.854571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:21.854571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:21.921230    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:21.921230    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:21.961787    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:21.961787    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:22.060842    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:24.566957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:24.591909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:24.624010    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.624010    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:24.627550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:24.657938    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.657938    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:24.661917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:24.688848    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.688848    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:24.692388    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:24.722130    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.722165    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:24.725802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:24.754067    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.754134    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:24.757294    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:24.783522    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.783595    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:24.787022    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:24.818531    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.818531    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:24.822200    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:24.851316    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.851371    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:24.851391    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:24.851391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:24.940030    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:24.941511    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:24.941511    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:24.967127    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:24.967127    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:25.018271    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:25.018358    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:25.077769    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:25.077769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
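Each pass above is the same probe sequence: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component, with an empty ID list logged as "0 containers" and "No container was found matching ..." (logs.go:282/284), followed by log gathering via journalctl, dmesg, "kubectl describe nodes", and a container listing. A minimal sketch of the container poll, assuming the same docker CLI is on PATH (an illustrative re-creation of what the log shows, not minikube's own code):

// poll_containers.go - hypothetical re-creation of the per-component container
// poll logged above; mirrors the "0 containers" / "No container was found" lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %q: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out)) // one container ID per output line
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

The container-status step also shows a fallback idiom: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is installed and falls back to plain docker ps otherwise.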
	I1216 06:24:27.621222    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:27.644179    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:27.675033    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.675033    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:27.678724    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:27.707945    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.707945    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:27.712443    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:27.740635    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.740635    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:27.744539    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:27.775332    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.775332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:27.779621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:27.807884    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.807884    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:27.812207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:27.843877    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.843877    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:27.850126    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:27.878365    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.878365    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:27.883323    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:27.911733    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.911733    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:27.911733    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:27.911733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:27.975085    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:27.975085    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:28.011926    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:28.011926    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:28.117959    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:28.117959    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:28.117959    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:28.146135    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:28.146135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:30.702904    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:30.732783    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:30.768726    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.768726    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:30.772432    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:30.804888    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.804888    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:30.809005    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:30.839403    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.839403    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:30.843668    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:30.874013    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.874013    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:30.878013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:30.906934    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.906934    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:30.911178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:30.936942    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.936942    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:30.940954    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:30.967843    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.967843    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:30.973798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:31.000614    8452 logs.go:282] 0 containers: []
	W1216 06:24:31.000614    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:31.000614    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:31.000614    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:31.063545    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:31.063545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:31.101704    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:31.101704    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:31.201356    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:31.201356    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:31.201356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:31.229634    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:31.229634    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:33.780745    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:33.805148    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:33.836319    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.836319    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:33.840094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:33.872138    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.872167    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:33.875487    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:33.908318    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.908318    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:33.912197    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:33.940179    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.940223    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:33.944152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:33.974912    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.974912    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:33.978728    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:34.004557    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.004557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:34.008971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:34.037591    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.037591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:34.041537    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:34.073153    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.073153    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:34.073153    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:34.073153    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:34.139585    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:34.139585    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:34.177888    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:34.177888    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:34.273589    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:34.273589    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:34.273589    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:34.298805    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:34.298805    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:36.851957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:36.889887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:36.919682    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.919682    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:36.923468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:36.953008    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.953073    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:36.957253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:36.985770    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.985770    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:36.989059    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:37.015702    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.015702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:37.019508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:37.046311    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.046351    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:37.050327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:37.087936    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.087936    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:37.092175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:37.121271    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.121271    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:37.125767    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:37.153753    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.153814    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:37.153814    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:37.153869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:37.218058    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:37.218058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:37.256162    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:37.257161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:37.349292    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:37.349292    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:37.349292    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:37.378861    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:37.379384    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:39.931797    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:39.956069    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:39.991154    8452 logs.go:282] 0 containers: []
	W1216 06:24:39.991154    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:39.994809    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:40.021488    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.021488    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:40.025604    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:40.055464    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.055464    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:40.059576    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:40.085410    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.086402    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:40.090048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:40.120389    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.120389    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:40.125766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:40.159925    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.159962    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:40.163912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:40.190820    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.190820    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:40.194350    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:40.223821    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.223886    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:40.223886    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:40.223886    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:40.292033    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:40.292033    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:40.331274    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:40.331274    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:40.423708    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:40.423708    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:40.423708    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:40.452101    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:40.452101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.005925    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:43.029165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:43.060601    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.060601    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:43.064304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:43.092446    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.092446    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:43.096552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:43.127295    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.127347    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:43.130913    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:43.159919    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.159986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:43.163049    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:43.190310    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.190384    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:43.194093    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:43.223641    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.223641    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:43.227270    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:43.254592    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.254592    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:43.259912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:43.293166    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.293166    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:43.293166    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:43.293166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:43.328685    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:43.328685    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:43.412970    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:43.413012    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:43.413042    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:43.444573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:43.444573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.501857    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:43.501857    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.068154    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:46.095291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:46.125740    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.125740    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:46.131016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:46.160926    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.160926    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:46.164909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:46.192634    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.192634    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:46.196346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:46.224203    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.224203    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:46.228650    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:46.255541    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.255541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:46.259732    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:46.289377    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.289377    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:46.293566    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:46.321342    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.321342    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:46.325492    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:46.352311    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.352342    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:46.352342    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:46.352382    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.416761    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:46.416761    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:46.469641    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:46.469641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:46.580672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:46.581191    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:46.581229    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:46.608166    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:46.608166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:49.162834    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:49.187402    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:49.219893    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.219893    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:49.223424    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:49.252338    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.252338    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:49.255900    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:49.286106    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.286131    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:49.289776    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:49.317141    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.317141    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:49.322761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:49.353605    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.353605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:49.357674    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:49.385747    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.385793    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:49.388757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:49.417812    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.417812    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:49.421500    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:49.452746    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.452797    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:49.452797    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:49.452797    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:49.516432    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:49.516432    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:49.553647    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:49.553647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:49.647049    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:49.647087    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:49.647087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:49.671889    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:49.671889    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:52.224199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:52.248067    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:52.282412    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.282412    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:52.286308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:52.315955    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.315955    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:52.319894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:52.353188    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.353188    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:52.356528    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:52.387579    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.387579    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:52.392336    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:52.421909    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.421909    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:52.425890    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:52.458902    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.458902    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:52.462430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:52.498067    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.498140    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:52.501354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:52.528125    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.528125    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:52.528125    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:52.528125    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:52.593845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:52.593845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:52.632779    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:52.632779    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:52.732902    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:52.721650   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.722944   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.723751   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.725908   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.727170   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:52.721650   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.722944   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.723751   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.725908   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.727170   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:52.732902    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:52.732902    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:52.762437    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:52.762437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
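
The cycle above repeats while minikube waits for the control plane to appear: each pass checks for a kube-apiserver process, then lists containers for every expected component by name, and every lookup comes back empty. A minimal shell sketch of that probe loop, with the component names taken straight from the log (the k8s_ prefix is the naming convention for kubelet-managed Docker containers):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # docker ps -a also matches exited containers, so an empty result means
      # the component was never created, not merely that it crashed
      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
      echo "${c}: ${ids:-none}"
    done
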
	I1216 06:24:55.328400    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:55.355014    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:55.387364    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.387364    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:55.391085    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:55.417341    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.417341    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:55.421141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:55.450785    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.450785    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:55.454454    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:55.482484    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.482484    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:55.486100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:55.513682    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.513682    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:55.517291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:55.548548    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.548548    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:55.552971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:55.583380    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.583380    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:55.587471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:55.618619    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.618619    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:55.618619    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:55.618686    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:55.646962    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:55.646962    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.695480    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:55.695480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:55.757470    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:55.757470    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:55.796071    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:55.796071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:55.889833    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:55.877628   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.878745   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.880269   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.881391   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.882702   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:55.877628   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.878745   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.880269   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.881391   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.882702   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
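
Every "describe nodes" attempt fails identically: kubectl inside the node dials https://localhost:8443 and is refused, which is consistent with the empty k8s_kube-apiserver lookups above, i.e. nothing is listening on the apiserver port. As a hedged aside (these two commands are not in the log), one way to separate "apiserver not running" from "kubeconfig pointing at the wrong endpoint" from inside the node:

    # connection refused here confirms nothing answers on 8443 at all;
    # -k skips certificate checks since only reachability matters
    curl -ksS --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"
    # and confirm no listener is bound to the port
    sudo ss -ltn 'sport = :8443'
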
	I1216 06:24:58.396122    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:58.423573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:58.454757    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.454757    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:58.460430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:58.490597    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.490597    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:58.493832    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:58.523149    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.523149    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:58.526960    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:58.558649    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.558649    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:58.562228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:58.591400    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.591400    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:58.595569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:58.624162    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.624162    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:58.628070    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:58.660578    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.660578    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:58.664236    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:58.693155    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.693155    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:58.693155    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:58.693155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:58.732408    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:58.733409    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:58.823465    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:58.812767   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.814019   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.815130   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.816828   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.818278   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:24:58.812767   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.814019   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.815130   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.816828   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.818278   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:24:58.823465    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:58.823465    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:58.848772    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:58.848772    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:58.900567    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:58.900567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
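
The "container status" step in each cycle is a runtime-agnostic one-liner: it resolves crictl if installed and, if that is missing or fails, falls back to plain docker ps -a. An approximately equivalent spelled-out form of the logged command (same fallback behavior, just readable):

    # try the CRI view first (containerd / CRI-O / cri-dockerd) ...
    if command -v crictl >/dev/null 2>&1 && sudo crictl ps -a; then
      :  # crictl answered
    else
      sudo docker ps -a   # ... otherwise fall back to the Docker view
    fi
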
	I1216 06:25:01.465828    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:01.490385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:01.520316    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.520316    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:01.524299    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:01.555350    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.555350    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:01.559239    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:01.587077    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.587077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:01.591421    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:01.623853    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.623853    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:01.627746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:01.658165    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.658165    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:01.661588    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:01.703310    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.703310    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:01.709361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:01.740903    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.740903    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:01.744287    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:01.773431    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.773431    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:01.773431    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:01.773431    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:01.863541    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:01.853956   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.855113   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.856000   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.858627   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.859841   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:01.853956   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.855113   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.856000   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.858627   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.859841   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:01.863541    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:01.863541    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:01.891816    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:01.891816    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:01.936351    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:01.936351    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:01.997563    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:01.997563    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
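
Between container probes, each cycle also snapshots the host-side logs: the kubelet journal, the Docker and cri-docker journals, and recent kernel messages. The logged commands, grouped with their intent (-n 400 caps each journal at its last 400 lines; the dmesg filter keeps warnings and above):

    sudo journalctl -u kubelet -n 400                # kubelet service log
    sudo journalctl -u docker -u cri-docker -n 400   # container runtime logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel ring buffer
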
	I1216 06:25:04.541470    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:04.565886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:04.595881    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.595881    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:04.599716    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:04.629724    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.629749    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:04.633814    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:04.666020    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.666047    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:04.669510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:04.699730    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.699730    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:04.704016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:04.734540    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.734540    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:04.738414    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:04.765651    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.765651    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:04.769397    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:04.797315    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.797315    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:04.801409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:04.832845    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.832845    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:04.832845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:04.832845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.869617    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:04.869617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:04.958334    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:04.947769   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.948641   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.950127   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.953617   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.954566   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:04.947769   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.948641   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.950127   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.953617   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.954566   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:04.958334    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:04.958334    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:04.983497    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:04.983497    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:05.037861    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:05.037887    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
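
Each pass opens with a process-level check that is independent of Docker state: with -f, pgrep -x requires the pattern to match the whole command line, and -n keeps only the newest match. Its exit status alone tells the story here (the echo wrapper below is illustrative, not from the log):

    # non-zero exit means no kube-apiserver process exists at all,
    # matching the empty k8s_kube-apiserver container lookups
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo running || echo 'no kube-apiserver process'
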
	I1216 06:25:07.603239    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:07.626775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:07.655146    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.655146    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:07.658648    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:07.688192    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.688227    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:07.691749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:07.723836    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.723836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:07.727536    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:07.761238    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.761238    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:07.764987    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:07.792890    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.792890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:07.796847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:07.824734    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.824734    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:07.828821    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:07.859399    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.859399    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:07.862780    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:07.893406    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.893406    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:07.893457    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:07.893480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.954656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:07.954656    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:07.992200    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:07.993203    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:08.077979    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:08.068614   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.069601   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.072821   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.074198   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.075251   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:08.068614   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.069601   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.072821   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.074198   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.075251   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:08.077979    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:08.077979    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:08.102718    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:08.102718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:10.662101    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:10.688889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:10.721934    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.721996    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:10.727012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:10.760697    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.760746    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:10.763961    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:10.791222    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.791293    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:10.795121    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:10.826239    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.826317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:10.829753    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:10.857355    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.857355    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:10.861145    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:10.903922    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.903922    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:10.907990    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:10.937216    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.937281    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:10.940707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:10.969086    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.969086    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:10.969086    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:10.969238    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:11.062109    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:11.051521   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.052462   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.056878   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.058033   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.059089   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:11.051521   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.052462   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.056878   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.058033   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.059089   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:11.062109    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:11.062109    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:11.090185    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:11.090185    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:11.141444    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:11.141444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:11.199181    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:11.199181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:13.741347    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:13.766441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:13.800424    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.800424    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:13.805169    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:13.835040    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.835040    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:13.839295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:13.864861    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.866077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:13.869598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:13.898887    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.898887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:13.903167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:13.931208    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.931208    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:13.936649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:13.963722    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.963722    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:13.967474    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:13.998640    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.998640    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:14.002572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:14.031349    8452 logs.go:282] 0 containers: []
	W1216 06:25:14.031401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:14.031401    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:14.031401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:14.124587    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:14.114187   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.115232   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.117492   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.120421   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.121924   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:14.114187   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.115232   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.117492   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.120421   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.121924   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:14.124587    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:14.124714    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:14.153583    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:14.153583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:14.202636    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:14.202636    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:14.260591    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:14.260591    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:16.808603    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:16.833787    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:16.864300    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.864300    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:16.868592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:16.897549    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.897549    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:16.900917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:16.931516    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.931557    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:16.936698    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:16.965053    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.965053    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:16.969015    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:16.997017    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.997017    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:17.000551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:17.028733    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.028733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:17.032830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:17.062242    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.062242    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:17.066193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:17.096111    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.096186    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:17.096186    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:17.096243    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:17.126801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:17.126801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:17.178392    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:17.178392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:17.239223    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:17.239223    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:17.276363    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:17.277364    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:17.362910    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:17.350082   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.351537   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.353217   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356242   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356652   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:17.350082   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.351537   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.353217   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356242   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356652   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:19.869062    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:19.894371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:19.924915    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.924915    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:19.929351    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:19.956535    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.956535    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:19.960534    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:19.989334    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.989334    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:19.993202    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:20.021108    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.021108    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:20.025230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:20.054251    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.054251    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:20.057788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:20.088787    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.088860    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:20.092250    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:20.120577    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.120577    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:20.123857    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:20.153015    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.153015    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:20.153015    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:20.153015    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:20.241391    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:20.241391    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:20.241391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:20.267492    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:20.267554    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:20.321240    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:20.321880    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:20.384978    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:20.384978    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:22.926087    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:22.949774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:22.982854    8452 logs.go:282] 0 containers: []
	W1216 06:25:22.982854    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:22.986923    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:23.017638    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.017638    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:23.021130    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:23.052442    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.052667    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:23.058175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:23.085210    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.085210    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:23.089664    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:23.120747    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.120795    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:23.124581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:23.150600    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.150600    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:23.154602    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:23.182147    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.182147    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:23.185649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:23.217087    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.217087    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:23.217087    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:23.217087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:23.280619    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:23.280619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:23.318090    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:23.318090    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:23.406270    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:23.406270    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:23.406270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:23.435128    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:23.435128    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:25.989934    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:26.012706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:26.043141    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.043141    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:26.047435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:26.075985    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.075985    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:26.079830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:26.110575    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.110575    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:26.113774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:26.144668    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.144668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:26.148428    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:26.175392    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.175392    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:26.179120    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:26.211067    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.211067    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:26.215072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:26.243555    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.243586    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:26.246934    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:26.279876    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.279876    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:26.279876    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:26.279876    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:26.387447    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:26.387488    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:26.387537    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:26.413896    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:26.413896    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:26.462318    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:26.462318    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:26.527832    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:26.527832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.072565    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:29.096390    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:29.127989    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.127989    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:29.131385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:29.158741    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.158741    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:29.162538    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:29.190346    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.190346    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:29.193798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:29.222234    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.222234    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:29.225740    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:29.252553    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.252553    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:29.256489    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:29.285679    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.285733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:29.289742    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:29.320841    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.321050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:29.324841    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:29.352461    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.352587    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:29.352615    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:29.352615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:29.419045    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:29.419045    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.457659    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:29.457659    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:29.544155    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:29.544155    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:29.544155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:29.571612    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:29.571646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:32.139910    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:32.164438    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:32.196526    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.196526    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:32.200231    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:32.226279    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.226279    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:32.230146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:32.257831    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.257831    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:32.262665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:32.293641    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.293641    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:32.297746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:32.327055    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.327055    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:32.331274    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:32.362206    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.362206    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:32.365146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:32.394600    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.394600    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:32.400058    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:32.428075    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.428075    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:32.428075    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:32.428075    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:32.491661    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:32.491661    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:32.528847    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:32.528847    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:32.616464    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:32.616464    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:32.616464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:32.642397    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:32.642397    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:35.191852    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:35.225285    8452 out.go:203] 
	W1216 06:25:35.227244    8452 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 06:25:35.227244    8452 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 06:25:35.227244    8452 out.go:285] * Related issues:
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 06:25:35.230096    8452 out.go:203] 
	
	
	==> Docker <==
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162855054Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162940064Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162949966Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162955666Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162961567Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.163040877Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.163140989Z" level=info msg="Initializing buildkit"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.281453678Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293658962Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293830383Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293958199Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.294017906Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:19:30 newest-cni-256200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:19:31 newest-cni-256200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:19:31 newest-cni-256200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:47.919981   20240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:47.921517   20240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:47.922910   20240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:47.924024   20240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:47.925135   20240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633501] CPU: 10 PID: 466820 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f865800db20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f865800daf6.
	[  +0.000001] RSP: 002b:00007ffc8c624780 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000033] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.839091] CPU: 12 PID: 466960 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fa6af131b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fa6af131af6.
	[  +0.000001] RSP: 002b:00007ffe97387e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 06:22] tmpfs: Unknown parameter 'noswap'
	[  +9.428310] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:25:47 up  2:02,  0 user,  load average: 1.30, 3.26, 3.86
	Linux newest-cni-256200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:25:44 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:45 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 16 06:25:45 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:45 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:45 newest-cni-256200 kubelet[20050]: E1216 06:25:45.603313   20050 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:45 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:45 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:46 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 16 06:25:46 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:46 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:46 newest-cni-256200 kubelet[20078]: E1216 06:25:46.362500   20078 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:46 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:46 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:47 newest-cni-256200 kubelet[20106]: E1216 06:25:47.105838   20106 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:47 newest-cni-256200 kubelet[20206]: E1216 06:25:47.850963   20206 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:47 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
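Two signals in the log above point at one root cause. The apiserver probe (sudo pgrep -xnf kube-apiserver.*minikube.*) never finds a process, and the kubelet dies on every systemd restart with "kubelet is configured to not run on a host using cgroup v1". A kubelet that never starts creates no static pods, which is why every docker ps --filter=name=k8s_... query above returns 0 containers and the run ends in K8S_APISERVER_MISSING. A minimal sketch for confirming the cgroup version, assuming the profile name from this run and the standard stat(1) filesystem-type output (cgroup2fs for v2, tmpfs for v1):

	# inside the minikube node container
	minikube ssh -p newest-cni-256200 -- stat -fc %T /sys/fs/cgroup/
	# on the WSL2 distribution backing Docker Desktop
	wsl -e stat -fc %T /sys/fs/cgroup/

The kernel line ("5.15.153.1-microsoft-standard-WSL2") and dockerd's own "Support for cgroup v1 is deprecated" warning are consistent with the host still running cgroup v1, which this kubelet build refuses to start on.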
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (597.3881ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-256200" apiserver is not running, skipping kubectl commands (state="Stopped")
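The status checks here pull one field at a time with a go-template. Assuming minikube's --format template accepts multiple fields (not something this log demonstrates), both values could be read in a single call, for example:

	# hypothetical combined query; this cluster would print "Running Stopped"
	out/minikube-windows-amd64.exe status -p newest-cni-256200 --format "{{.Host}} {{.APIServer}}"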
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-256200
helpers_test.go:244: (dbg) docker inspect newest-cni-256200:

-- stdout --
	[
	    {
	        "Id": "144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66",
	        "Created": "2025-12-16T06:09:14.512792797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436653,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:19:21.496573864Z",
	            "FinishedAt": "2025-12-16T06:19:16.313765237Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hostname",
	        "HostsPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/hosts",
	        "LogPath": "/var/lib/docker/containers/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66/144d2cf5befbee810d50da8d64f6923091001fda697a209c419f92474281eb66-json.log",
	        "Name": "/newest-cni-256200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-256200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-256200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e81806120ca28b5cb113306ee9927764765e2b955e9f3b10b2f9f4ed5a3194c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-256200",
	                "Source": "/var/lib/docker/volumes/newest-cni-256200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-256200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-256200",
	                "name.minikube.sigs.k8s.io": "newest-cni-256200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e8e6d675d034626362ba9bfe3ff7eb692b71509157c5f340d1ebcb47d8e5bca3",
	            "SandboxKey": "/var/run/docker/netns/e8e6d675d034",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55872"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55868"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55869"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55871"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-256200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c97a08422fb6ea0a0f62c56d96c89be84aa4e33beba1ccaa82b7390e64b42c8e",
	                    "EndpointID": "fd51517b1d43bd1aa0aedcd49011763e39b0ec0911fbe06e3e82710415d585b2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-256200",
	                        "144d2cf5befb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
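The inspect output confirms the node container itself is healthy: State.Status is "running" and 8443/tcp is published on 127.0.0.1:55871, so the earlier "connection refused" errors come from the missing apiserver, not from a broken Docker port mapping. A quick probe of that mapping (port number copied from this inspect output; with no apiserver listening, the request is expected to be refused, matching the kubectl errors):

	# show the host port Docker publishes for the apiserver
	docker port newest-cni-256200 8443
	# probe it; -k skips verification of the minikube TLS certificate
	curl -k https://127.0.0.1:55871/version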
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (581.1417ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-256200 logs -n 25: (1.4259329s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-030800 sudo journalctl -xeu kubelet --all --full --no-pager          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/kubernetes/kubelet.conf                         │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status docker --all --full --no-pager          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat docker --no-pager                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/docker/daemon.json                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo docker system info                                       │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat cri-docker --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cri-dockerd --version                                    │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status containerd --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat containerd --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /lib/systemd/system/containerd.service               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/containerd/config.toml                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo containerd config dump                                   │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status crio --all --full --no-pager            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat crio --no-pager                            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo crio config                                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete  │ -p kubenet-030800                                                               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image   │ newest-cni-256200 image list --format=json                                      │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ pause   │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ unpause │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:21:31
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:21:31.068463    4424 out.go:360] Setting OutFile to fd 1300 ...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.112163    4424 out.go:374] Setting ErrFile to fd 1224...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.126168    4424 out.go:368] Setting JSON to false
	I1216 06:21:31.128157    4424 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7112,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:21:31.129155    4424 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:21:31.133155    4424 out.go:179] * [kubenet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:21:31.136368    4424 notify.go:221] Checking for updates...
	I1216 06:21:31.137751    4424 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:31.140914    4424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:21:31.143313    4424 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:21:31.145626    4424 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:21:31.147629    4424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:21:31.150478    4424 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151727    4424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:21:31.272417    4424 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:21:31.275875    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.534539    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.516919297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
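The docker info dump above comes from running docker system info --format "{{json .}}" and decoding the result. A minimal sketch of that probe in Go, assuming only that the docker CLI is on PATH; the dockerInfo struct below is an illustrative subset, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo holds just the fields this sketch cares about; the real
// "docker system info" JSON has many more (see the log line above).
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	OSType        string `json:"OSType"`
}

func main() {
	// Ask the daemon for its full info blob as a single JSON object.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("server %s, %d CPUs, %d bytes RAM, os %s\n",
		info.ServerVersion, info.NCPU, info.MemTotal, info.OSType)
}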
	I1216 06:21:31.537553    4424 out.go:179] * Using the docker driver based on user configuration
	I1216 06:21:31.541211    4424 start.go:309] selected driver: docker
	I1216 06:21:31.541254    4424 start.go:927] validating driver "docker" against <nil>
	I1216 06:21:31.541286    4424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:21:31.597589    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.842240    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.823958826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.842240    4424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:21:31.843240    4424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:31.846236    4424 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:21:31.848222    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:21:31.848222    4424 start.go:353] cluster config:
	{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:21:31.851222    4424 out.go:179] * Starting "kubenet-030800" primary control-plane node in "kubenet-030800" cluster
	I1216 06:21:31.860233    4424 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:21:31.863229    4424 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:21:31.866228    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:31.866228    4424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:21:31.866228    4424 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:21:31.866228    4424 cache.go:65] Caching tarball of preloaded images
	I1216 06:21:31.866228    4424 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:21:31.866228    4424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:21:31.866228    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:31.866228    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json: {Name:mkd9bbe5249f898d86f7b7f3961735d2ed71d636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:31.935458    4424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:21:31.935458    4424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:21:31.935988    4424 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:21:31.936042    4424 start.go:360] acquireMachinesLock for kubenet-030800: {Name:mka6ae821c9ad8ee62e1a8eef0f2acffe81ebe64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:21:31.936202    4424 start.go:364] duration metric: took 160.2µs to acquireMachinesLock for "kubenet-030800"
	I1216 06:21:31.936352    4424 start.go:93] Provisioning new machine with config: &{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:31.936477    4424 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:21:30.055522    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:30.078529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:30.109519    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.109519    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:30.112511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:30.144765    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.144765    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:30.149313    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:30.181852    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.181852    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:30.184838    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:30.217464    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.217504    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:30.221332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:30.249301    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.249301    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:30.252441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:30.278998    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.278998    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:30.281997    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:30.312414    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.312414    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:30.318242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:30.357304    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.357361    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
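Each probe in the sweep above runs docker ps -a with a name filter and treats an empty ID list as a missing component ("0 containers" followed by the warning). A minimal sketch of that pattern; the filter names mirror the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component filters the diagnostic sweep above checks.
	for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %q: %v\n", name, err)
			continue
		}
		// One container ID per output line; an empty result means the
		// component never started on this node.
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
		}
	}
}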
	I1216 06:21:30.357422    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:30.357422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:30.746929    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:30.746929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:30.795665    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:30.795746    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:30.878691    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:30.878691    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:30.913679    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:30.913679    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:31.010602    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
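The repeated "connection refused" errors above mean nothing is listening on the apiserver port yet, which is consistent with the "0 containers" probes: kubectl has nothing to talk to. A minimal sketch of the kind of reachability wait this implies, with an illustrative address and timings rather than minikube's actual retry logic:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Poll the apiserver port until it accepts a TCP connection or we
	// give up. 30s/2s are illustrative values for this sketch.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
		fmt.Printf("not ready yet: %v\n", err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for localhost:8443")
}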
	I1216 06:21:33.516463    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:33.569431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:33.607164    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.607164    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:33.610177    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:33.645795    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.645795    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:33.649795    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:33.678786    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.678786    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:33.681789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:33.712090    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.712090    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:33.716091    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:33.749207    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.749207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:33.753072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:33.787281    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.787317    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:33.790849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:33.822451    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.822451    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:33.828957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:33.859827    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.859881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:33.859881    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:33.859929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:33.922584    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:33.922584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:34.010640    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:34.010640    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:34.010640    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:34.047937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:34.047937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:34.108035    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:34.108113    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:31.939854    4424 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:21:31.939854    4424 start.go:159] libmachine.API.Create for "kubenet-030800" (driver="docker")
	I1216 06:21:31.939854    4424 client.go:173] LocalClient.Create starting
	I1216 06:21:31.940866    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.946190    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:21:32.002258    4424 cli_runner.go:211] docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:21:32.006251    4424 network_create.go:284] running [docker network inspect kubenet-030800] to gather additional debugging logs...
	I1216 06:21:32.006251    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800
	W1216 06:21:32.057274    4424 cli_runner.go:211] docker network inspect kubenet-030800 returned with exit code 1
	I1216 06:21:32.057274    4424 network_create.go:287] error running [docker network inspect kubenet-030800]: docker network inspect kubenet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-030800 not found
	I1216 06:21:32.057274    4424 network_create.go:289] output of [docker network inspect kubenet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-030800 not found
	
	** /stderr **
	I1216 06:21:32.061267    4424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:21:32.137401    4424 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.168856    4424 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.184860    4424 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.200856    4424 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.216426    4424 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.232146    4424 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d96b0}
	I1216 06:21:32.232146    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 06:21:32.235443    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	W1216 06:21:32.288644    4424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800 returned with exit code 1
	W1216 06:21:32.288644    4424 network_create.go:149] failed to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:21:32.288644    4424 network_create.go:116] failed to create docker network kubenet-030800 192.168.94.0/24, will retry: subnet is taken
	I1216 06:21:32.308048    4424 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.321168    4424 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015f57d0}
	I1216 06:21:32.321265    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:21:32.325637    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	I1216 06:21:32.469323    4424 network_create.go:108] docker network kubenet-030800 192.168.103.0/24 created
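The retry above shows the subnet-allocation strategy: candidate 192.168.x.0/24 ranges are stepped through (49, 58, 67, ... as the skipped subnets suggest), and a "Pool overlaps" error from docker network create marks a subnet as taken and moves on to the next. A minimal sketch of that loop, assuming the docker CLI is available; the network name is a placeholder:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	name := "example-net" // hypothetical network name for this sketch
	// Step the third octet by 9, matching the candidates seen in the log.
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			fmt.Printf("created %s on %s\n", name, subnet)
			return
		}
		// The daemon reports an in-use range with this exact phrase.
		if strings.Contains(string(out), "Pool overlaps") {
			fmt.Printf("subnet %s is taken, retrying\n", subnet)
			continue
		}
		log.Fatalf("network create failed: %v: %s", err, out)
	}
	log.Fatal("no free subnet found")
}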
	I1216 06:21:32.469323    4424 kic.go:121] calculated static IP "192.168.103.2" for the "kubenet-030800" container
	I1216 06:21:32.483125    4424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:21:32.541557    4424 cli_runner.go:164] Run: docker volume create kubenet-030800 --label name.minikube.sigs.k8s.io=kubenet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:21:32.608360    4424 oci.go:103] Successfully created a docker volume kubenet-030800
	I1216 06:21:32.611360    4424 cli_runner.go:164] Run: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:21:34.117036    4424 cli_runner.go:217] Completed: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.5056549s)
	I1216 06:21:34.117036    4424 oci.go:107] Successfully prepared a docker volume kubenet-030800
	I1216 06:21:34.117036    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:34.117036    4424 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:21:34.121793    4424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 06:21:37.760556    7800 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:21:37.760556    7800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:21:37.761189    7800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:21:37.761753    7800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:21:37.761881    7800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:21:37.761881    7800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:21:37.764442    7800 out.go:252]   - Generating certificates and keys ...
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:21:37.765188    7800 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:21:37.765955    7800 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:21:37.766018    7800 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:21:37.766124    7800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:21:37.766165    7800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:21:37.766271    7800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:21:37.766333    7800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:21:37.766397    7800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:21:37.766458    7800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:21:37.770151    7800 out.go:252]   - Booting up control plane ...
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:21:37.770817    7800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:21:37.770952    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:21:37.771091    7800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:21:37.771167    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:21:37.771225    7800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:21:37.771366    7800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004327208s
	I1216 06:21:37.771902    7800 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:21:37.772247    7800 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 06:21:37.772484    7800 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:21:37.772735    7800 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:21:37.773067    7800 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.101943404s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.591910767s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002177662s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:21:37.773799    7800 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:21:37.773799    7800 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:21:37.774455    7800 kubeadm.go:319] [mark-control-plane] Marking the node bridge-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:21:37.774523    7800 kubeadm.go:319] [bootstrap-token] Using token: lrkd8c.ky3vlqagn7chac73
	I1216 06:21:37.777890    7800 out.go:252]   - Configuring RBAC rules ...
	I1216 06:21:37.777890    7800 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:21:37.779666    7800 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:21:37.780278    7800 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:21:37.780278    7800 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:21:37.781243    7800 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--control-plane 
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
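The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key in DER (SubjectPublicKeyInfo) form, so a joining node can pin the CA without transferring ca.crt. A minimal sketch that recomputes it, assuming a PEM-encoded CA certificate at an illustrative path:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative path; on the node this is /etc/kubernetes/pki/ca.crt.
	data, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// DER-encode the SubjectPublicKeyInfo and hash it, which is what
	// kubeadm's discovery hash covers.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}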
	I1216 06:21:37.782257    7800 cni.go:84] Creating CNI manager for "bridge"
	I1216 06:21:37.785969    7800 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1216 06:21:35.013402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:37.791788    7800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 06:21:37.806804    7800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 06:21:37.825807    7800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-030800 minikube.k8s.io/updated_at=2025_12_16T06_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=bridge-030800 minikube.k8s.io/primary=true
	I1216 06:21:37.839814    7800 ops.go:34] apiserver oom_adj: -16
	I1216 06:21:38.032186    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:38.534048    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.035804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.534294    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:36.686452    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:36.704466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:36.733394    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.733394    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:36.737348    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:36.773510    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.773510    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:36.776509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:36.805498    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.805498    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:36.809501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:36.845499    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.845499    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:36.849511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:36.879646    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.879646    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:36.884108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:36.915408    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.915408    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:36.920279    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:36.952754    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.952754    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:36.955734    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:36.987884    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.987884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:36.987884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:36.987884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:37.053646    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:37.053646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:37.093881    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:37.093881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:37.179527    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:37.179584    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:37.179628    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:37.206261    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:37.206297    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:39.778747    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:39.802598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:39.832179    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.832179    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:39.836704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:39.869121    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.869121    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:39.873774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:39.909668    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.909668    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:39.914691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:39.947830    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.947830    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:39.951594    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:40.034177    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:40.535099    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.034558    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.535126    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.034691    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.533593    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.035143    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.831113    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:44.554108    7800 kubeadm.go:1114] duration metric: took 6.7282073s to wait for elevateKubeSystemPrivileges
	I1216 06:21:44.554108    7800 kubeadm.go:403] duration metric: took 23.3439157s to StartCluster
	I1216 06:21:44.554108    7800 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.554108    7800 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:44.555899    7800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.557179    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:21:44.557179    7800 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:44.557179    7800 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:21:44.557179    7800 addons.go:70] Setting storage-provisioner=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:239] Setting addon storage-provisioner=true in "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:70] Setting default-storageclass=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 host.go:66] Checking if "bridge-030800" exists ...
	I1216 06:21:44.557179    7800 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-030800"
	I1216 06:21:44.557179    7800 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.910438    7800 out.go:179] * Verifying Kubernetes components...
	I1216 06:21:39.982557    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.982557    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:39.986642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:40.018169    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.018169    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:40.021165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:40.051243    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.051243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:40.057090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:40.084414    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.084414    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
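	[Note: the block above is minikube's per-component container sweep. With the Docker runtime, kubelet names containers `k8s_<container>_<pod>_...`, so a `docker ps` name filter per component shows which control-plane pieces exist; empty output maps to the "0 containers" / "No container was found" pairs in the log. A standalone sketch of the same sweep, assuming only a local docker CLI (hypothetical helper, not minikube's actual code):]

```go
// Sketch of the per-component sweep: count containers whose
// kubelet-assigned name starts with k8s_<component>.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; empty means "0 containers"
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```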
	I1216 06:21:40.084414    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:40.084414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:40.144414    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:40.144414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:40.179632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:40.179632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:40.269800    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:40.270323    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:40.270323    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:40.298399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:40.298399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:42.860131    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:42.883733    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:42.914204    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.914204    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:42.918228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:42.950380    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.950459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:42.954335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:42.985562    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.985562    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:42.988741    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:43.016773    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.016773    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:43.020957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:43.059042    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.059042    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:43.062546    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:43.091529    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.091529    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:43.095547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:43.122188    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.122188    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:43.126264    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:43.154603    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.154667    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:43.154687    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:43.154709    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:43.217437    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:43.217437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:43.256550    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:43.256550    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:43.334672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:43.334672    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:43.334672    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:43.363324    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:43.363324    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:44.625758    7800 addons.go:239] Setting addon default-storageclass=true in "bridge-030800"
	I1216 06:21:44.961765    7800 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:21:44.962159    7800 host.go:66] Checking if "bridge-030800" exists ...
	W1216 06:21:45.051007    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:45.413866    7800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:45.416342    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:45.428762    7800 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.428762    7800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:21:45.433231    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.481472    7800 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:45.481472    7800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:21:45.485567    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.487870    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.534738    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:21:45.540734    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.651776    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.743561    7800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:21:45.947134    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:48.661269    7800 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.1264885s)
	I1216 06:21:48.661269    7800 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
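	[Note: the sed pipeline completed above splices a `hosts` stanza into the CoreDNS Corefile ahead of the `forward . /etc/resolv.conf` line, so in-cluster lookups of host.minikube.internal resolve to the host gateway. Reconstructed from the sed expression in the command, the injected stanza is:]

```
        hosts {
           192.168.65.254 host.minikube.internal
           fallthrough
        }
```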
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.2776091s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.1858261s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.9822555s)
	I1216 06:21:48.933443    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:48.974829    7800 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:21:48.977844    7800 addons.go:530] duration metric: took 4.4206041s for enable addons: enabled=[storage-provisioner default-storageclass]
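	[Note: addon enablement, per the surrounding lines, is two steps: scp the manifest into /etc/kubernetes/addons/ on the node, then apply it with the cluster's node-local kubeconfig. A rough standalone equivalent of the apply step, to be run inside the node; illustrative, not minikube's actual code:]

```go
// Illustrative apply step, mirroring the
// "sudo KUBECONFIG=... kubectl apply -f ..." lines above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.2/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```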
	I1216 06:21:48.994296    7800 node_ready.go:35] waiting up to 15m0s for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 node_ready.go:49] node "bridge-030800" is "Ready"
	I1216 06:21:49.024312    7800 node_ready.go:38] duration metric: took 30.0163ms for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:21:49.030307    7800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.051593    7800 api_server.go:72] duration metric: took 4.4943521s to wait for apiserver process to appear ...
	I1216 06:21:49.051593    7800 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:21:49.051593    7800 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56268/healthz ...
	I1216 06:21:49.061499    7800 api_server.go:279] https://127.0.0.1:56268/healthz returned 200:
	ok
	I1216 06:21:49.063514    7800 api_server.go:141] control plane version: v1.34.2
	I1216 06:21:49.063514    7800 api_server.go:131] duration metric: took 11.9204ms to wait for apiserver health ...
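	[Note: the healthz wait above reduces to GETing /healthz on the apiserver port forwarded to the Windows host until it returns 200 "ok". A minimal version; the port is the one from this run, and InsecureSkipVerify is an assumption made only because this local diagnostic targets a self-signed serving cert:]

```go
// Minimal healthz probe against the locally forwarded apiserver port.
// TLS verification is skipped for this local check only (assumption).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://127.0.0.1:56268/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // this run logged: 200 ok
}
```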
	I1216 06:21:49.064510    7800 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:21:49.088115    7800 system_pods.go:59] 8 kube-system pods found
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.088115    7800 system_pods.go:74] duration metric: took 23.6038ms to wait for pod list to return data ...
	I1216 06:21:49.088115    7800 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:21:49.094110    7800 default_sa.go:45] found service account: "default"
	I1216 06:21:49.094110    7800 default_sa.go:55] duration metric: took 5.9949ms for default service account to be created ...
	I1216 06:21:49.094110    7800 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:21:49.100097    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.100097    7800 retry.go:31] will retry after 202.33386ms: missing components: kube-dns, kube-proxy
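	[Note: the retry.go lines above and below are a poll-until-ready loop: list the kube-system pods, report which expected components are still missing, sleep a short randomized interval, and repeat until everything is Running or the 15m budget expires. A compact sketch of the pattern; the growth factor and jitter are illustrative inferences from the printed delays, not minikube's exact schedule:]

```go
// Poll-until-ready with growing, jittered backoff, the shape of the
// retry.go lines in this log. check returns nil once nothing is missing.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		err := check()
		if err == nil {
			return nil
		}
		fmt.Println("will retry:", err)
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt (assumption)
	}
	return errors.New("timed out")
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("missing components: kube-dns, kube-proxy")
		}
		return nil
	}, 15*time.Minute)
	fmt.Println("done, err =", err)
}
```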
	I1216 06:21:49.170358    7800 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-030800" context rescaled to 1 replicas
	I1216 06:21:49.310950    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.310950    7800 retry.go:31] will retry after 302.122926ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.630338    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630577    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.630663    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.630695    7800 retry.go:31] will retry after 447.973015ms: missing components: kube-dns, kube-proxy
	I1216 06:21:45.929428    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:45.950755    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:45.982605    8452 logs.go:282] 0 containers: []
	W1216 06:21:45.982605    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:45.987711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:46.020649    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.020649    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:46.024931    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:46.058836    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.058836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:46.066651    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:46.094860    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.094860    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:46.098689    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:46.127246    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.127246    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:46.130937    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:46.159519    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.159519    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:46.163609    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:46.195483    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.195483    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:46.199178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:46.229349    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.229349    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:46.229349    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:46.229349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:46.292392    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:46.292392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:46.330903    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:46.330903    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:46.427283    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:46.427283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:46.427283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:46.458647    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:46.458647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:49.005293    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.027308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:49.061499    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.061499    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:49.064510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:49.094110    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.094110    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:49.097105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:49.126749    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.126749    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:49.132748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:49.169344    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.169344    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:49.174332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:49.209103    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.209103    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:49.214581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:49.248644    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.248644    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:49.251643    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:49.281632    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.281632    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:49.285645    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:49.316955    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.316955    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:49.316955    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:49.317942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:49.391656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:49.391738    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:49.432724    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:49.432724    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:49.523989    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:49.523989    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:49.523989    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:49.552004    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:49.552004    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:48.467044    4424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.3450525s)
	I1216 06:21:48.467044    4424 kic.go:203] duration metric: took 14.349809s to extract preloaded images to volume ...
	I1216 06:21:48.470844    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:48.730876    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:48.710057733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:48.733867    4424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:21:48.983392    4424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-030800 --name kubenet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-030800 --network kubenet-030800 --ip 192.168.103.2 --volume kubenet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:21:49.764686    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Running}}
	I1216 06:21:49.828590    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:49.890595    4424 cli_runner.go:164] Run: docker exec kubenet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:21:50.004225    4424 oci.go:144] the created container "kubenet-030800" has a running status.
	I1216 06:21:50.005228    4424 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.057161    4424 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:21:50.141101    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:50.207656    4424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:21:50.207656    4424 kic_runner.go:114] Args: [docker exec --privileged kubenet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:21:50.326664    4424 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.087090    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.087090    7800 retry.go:31] will retry after 426.637768ms: missing components: kube-dns, kube-proxy
	I1216 06:21:50.538640    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.538640    7800 retry.go:31] will retry after 479.139187ms: missing components: kube-dns
	I1216 06:21:51.025065    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.025065    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:51.025193    7800 retry.go:31] will retry after 758.159415ms: missing components: kube-dns
	I1216 06:21:51.791088    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Running
	I1216 06:21:51.791088    7800 system_pods.go:126] duration metric: took 2.6969413s to wait for k8s-apps to be running ...
	I1216 06:21:51.791088    7800 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:21:51.798336    7800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:21:51.818183    7800 system_svc.go:56] duration metric: took 27.0943ms WaitForService to wait for kubelet
	I1216 06:21:51.818183    7800 kubeadm.go:587] duration metric: took 7.2609035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:51.818183    7800 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:21:51.825244    7800 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:21:51.825244    7800 node_conditions.go:123] node cpu capacity is 16
	I1216 06:21:51.825244    7800 node_conditions.go:105] duration metric: took 7.0607ms to run NodePressure ...
	I1216 06:21:51.825244    7800 start.go:242] waiting for startup goroutines ...
	I1216 06:21:51.825244    7800 start.go:247] waiting for cluster config update ...
	I1216 06:21:51.825244    7800 start.go:256] writing updated cluster config ...
	I1216 06:21:51.833706    7800 ssh_runner.go:195] Run: rm -f paused
	I1216 06:21:51.841597    7800 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:21:51.851622    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:21:53.862268    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:52.109148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:52.140748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:52.186855    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.186855    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:52.193111    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:52.227511    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.227511    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:52.232508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:52.265331    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.265331    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:52.270635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:52.301130    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.301130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:52.307669    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:52.342623    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.342623    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:52.347794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:52.387246    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.387246    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:52.392629    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:52.445055    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.445143    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:52.449252    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:52.480071    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.480071    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:52.480071    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:52.480071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:52.547115    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:52.547115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:52.590630    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:52.590630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:52.690412    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:52.690412    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:52.690412    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:52.718573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:52.718573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:52.546527    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:52.603159    4424 machine.go:94] provisionDockerMachine start ...
	I1216 06:21:52.606161    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.662674    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.679442    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.679519    4424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:21:52.842464    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:52.842464    4424 ubuntu.go:182] provisioning hostname "kubenet-030800"
	I1216 06:21:52.846473    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.908771    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.908771    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.908771    4424 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-030800 && echo "kubenet-030800" | sudo tee /etc/hostname
	I1216 06:21:53.084692    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:53.088917    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.150284    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.150284    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.150284    4424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:21:53.322772    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 06:21:53.322772    4424 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:21:53.322772    4424 ubuntu.go:190] setting up certificates
	I1216 06:21:53.322772    4424 provision.go:84] configureAuth start
	I1216 06:21:53.326658    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:53.379472    4424 provision.go:143] copyHostCerts
	I1216 06:21:53.379472    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:21:53.379472    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:21:53.379472    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:21:53.381506    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:21:53.381506    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:21:53.382025    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:21:53.383238    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:21:53.383286    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:21:53.383622    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:21:53.384729    4424 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-030800 san=[127.0.0.1 192.168.103.2 kubenet-030800 localhost minikube]
	I1216 06:21:53.446404    4424 provision.go:177] copyRemoteCerts
	I1216 06:21:53.450578    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:21:53.453632    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.508049    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:53.625841    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:21:53.652177    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:21:53.678648    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:21:53.702593    4424 provision.go:87] duration metric: took 379.8156ms to configureAuth
	I1216 06:21:53.702593    4424 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:21:53.703116    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:53.706020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.763080    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.763659    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.763659    4424 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:21:53.941197    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:21:53.941229    4424 ubuntu.go:71] root file system type: overlay
	I1216 06:21:53.941395    4424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:21:53.945310    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.000318    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.000318    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.000318    4424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:21:54.194977    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:21:54.198986    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.262183    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.262873    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.262912    4424 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:21:55.764091    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:21:54.174803160 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:21:55.764091    4424 machine.go:97] duration metric: took 3.1608879s to provisionDockerMachine
	I1216 06:21:55.764091    4424 client.go:176] duration metric: took 23.8239056s to LocalClient.Create
	I1216 06:21:55.764091    4424 start.go:167] duration metric: took 23.8239056s to libmachine.API.Create "kubenet-030800"
	I1216 06:21:55.764091    4424 start.go:293] postStartSetup for "kubenet-030800" (driver="docker")
	I1216 06:21:55.764091    4424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:21:55.769330    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:21:55.774020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:55.832721    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:55.960433    4424 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:21:55.968801    4424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:21:55.968801    4424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:21:55.969505    4424 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:21:55.973822    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:21:55.985938    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:21:56.011522    4424 start.go:296] duration metric: took 247.4281ms for postStartSetup
	I1216 06:21:56.016962    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.071317    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:56.078704    4424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:21:56.082131    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	W1216 06:21:55.087921    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:56.146380    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.278810    4424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:21:56.289463    4424 start.go:128] duration metric: took 24.3526481s to createHost
	I1216 06:21:56.289463    4424 start.go:83] releasing machines lock for "kubenet-030800", held for 24.352923s
	I1216 06:21:56.293770    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.349762    4424 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:21:56.354527    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.355718    4424 ssh_runner.go:195] Run: cat /version.json
	I1216 06:21:56.359207    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.419217    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.420010    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.548149    4424 ssh_runner.go:195] Run: systemctl --version
	W1216 06:21:56.549226    4424 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1216 06:21:56.567514    4424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:21:56.574755    4424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:21:56.580435    4424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:21:56.633416    4424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:21:56.633416    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:56.633416    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:56.633416    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:56.657618    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 06:21:56.658090    4424 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:21:56.658134    4424 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:21:56.678200    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:21:56.690681    4424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:21:56.695430    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:21:56.714310    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.735757    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:21:56.754647    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.771876    4424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:21:56.790078    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:21:56.810936    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:21:56.828529    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:21:56.859717    4424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:21:56.876724    4424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:21:56.891719    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.036224    4424 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1216 06:21:57.185425    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:57.185522    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:57.190092    4424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:21:57.213249    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.239566    4424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:21:57.303231    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.326154    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:21:57.344861    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:57.372889    4424 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:21:57.386009    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:21:57.401220    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1216 06:21:57.422607    4424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:21:57.590920    4424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:21:57.727211    4424 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:21:57.727211    4424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:21:57.751771    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:21:57.772961    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.912458    4424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:21:58.834645    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:21:58.856232    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:21:58.880727    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:58.906712    4424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:21:59.052553    4424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:21:59.194941    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.333924    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:21:59.357147    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:21:59.379570    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.513788    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:21:59.631489    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:59.649336    4424 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:21:59.653752    4424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:21:59.660755    4424 start.go:564] Will wait 60s for crictl version
	I1216 06:21:59.665368    4424 ssh_runner.go:195] Run: which crictl
	I1216 06:21:59.677200    4424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:21:59.717428    4424 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:21:59.720622    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:21:59.765567    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	W1216 06:21:55.865199    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	W1216 06:21:58.365962    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:55.273773    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:55.297441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:55.334351    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.334404    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:55.338338    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:55.372344    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.372344    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:55.375335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:55.429711    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.429711    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:55.432707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:55.463415    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.463415    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:55.466882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:55.495871    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.495871    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:55.499782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:55.530135    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.530135    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:55.534032    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:55.561956    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.561956    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:55.567456    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:55.598684    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.598684    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:55.598684    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:55.598684    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:55.661553    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:55.661553    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:55.699330    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:55.699330    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:55.806271    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:55.806271    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:55.806271    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:55.839937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:55.839937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.400590    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:58.422580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:58.453081    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.453081    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:58.457176    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:58.482739    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.482739    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:58.486288    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:58.516198    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.516198    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:58.520374    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:58.550134    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.550134    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:58.553679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:58.585815    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.585815    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:58.589532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:58.620180    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.620180    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:58.626021    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:58.656946    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.656946    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:58.659942    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:58.687618    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.687618    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:58.687618    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:58.687618    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:58.777493    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:58.777493    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:58.777493    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:58.805676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:58.805676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.860391    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:58.860391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:58.925444    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:58.925444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:59.807579    4424 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:21:59.810667    4424 cli_runner.go:164] Run: docker exec -t kubenet-030800 dig +short host.docker.internal
	I1216 06:21:59.962844    4424 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:21:59.967733    4424 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:21:59.974503    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:21:59.995371    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:00.053937    4424 kubeadm.go:884] updating cluster {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:22:00.053937    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:22:00.057874    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.094105    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.094105    4424 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:22:00.097332    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.129189    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.129225    4424 cache_images.go:86] Images are preloaded, skipping loading
	I1216 06:22:00.129280    4424 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:22:00.129486    4424 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:22:00.132350    4424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:22:00.208072    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:00.208072    4424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:22:00.208072    4424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-030800 NodeName:kubenet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:22:00.208072    4424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
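	[editor's note] The generated kubeadm config above is one YAML stream holding four documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Note that the KubeletConfiguration deliberately disables disk-pressure eviction (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at "0%") so CI hosts with tight disks do not evict test pods. A minimal sketch of walking such a multi-document stream, assuming gopkg.in/yaml.v3 (this is not how minikube itself parses the file):

	    package main

	    import (
	    	"fmt"
	    	"io"
	    	"os"

	    	"gopkg.in/yaml.v3"
	    )

	    func main() {
	    	f, err := os.Open("kubeadm.yaml")
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer f.Close()

	    	// A yaml.v3 decoder consumes one document per Decode call,
	    	// so a loop visits every "---"-separated document in turn.
	    	dec := yaml.NewDecoder(f)
	    	for {
	    		var doc struct {
	    			APIVersion string `yaml:"apiVersion"`
	    			Kind       string `yaml:"kind"`
	    		}
	    		if err := dec.Decode(&doc); err == io.EOF {
	    			break
	    		} else if err != nil {
	    			panic(err)
	    		}
	    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	    	}
	    }
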
	I1216 06:22:00.213204    4424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:22:00.225061    4424 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:22:00.229012    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:22:00.242127    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
	I1216 06:22:00.258591    4424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:22:00.278876    4424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 06:22:00.305788    4424 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:22:00.315868    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
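	[editor's note] The bash one-liner above is minikube's idempotent /etc/hosts update: grep -v strips any stale control-plane.minikube.internal entry, the current IP is appended, and the result is written to a temp file and copied into place. The same idea in Go, as a local sketch (minikube runs this remotely over SSH as shown; the rename-over strategy here is illustrative):

	    package main

	    import (
	    	"os"
	    	"strings"
	    )

	    // upsertHost rewrites hostsPath so exactly one line maps name to ip.
	    func upsertHost(hostsPath, ip, name string) error {
	    	data, err := os.ReadFile(hostsPath)
	    	if err != nil {
	    		return err
	    	}
	    	var kept []string
	    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	    		// Drop any stale entry for this hostname (the grep -v step).
	    		if strings.HasSuffix(line, "\t"+name) {
	    			continue
	    		}
	    		kept = append(kept, line)
	    	}
	    	kept = append(kept, ip+"\t"+name)
	    	tmp := hostsPath + ".tmp"
	    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
	    		return err
	    	}
	    	// Rename is atomic on the same filesystem, so readers never
	    	// observe a half-written hosts file.
	    	return os.Rename(tmp, hostsPath)
	    }

	    func main() {
	    	if err := upsertHost("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
	    		panic(err)
	    	}
	    }
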
	I1216 06:22:00.339710    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:00.483171    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:00.505844    4424 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800 for IP: 192.168.103.2
	I1216 06:22:00.505844    4424 certs.go:195] generating shared ca certs ...
	I1216 06:22:00.505844    4424 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.506501    4424 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:22:00.507023    4424 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:22:00.507484    4424 certs.go:257] generating profile certs ...
	I1216 06:22:00.507484    4424 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key
	I1216 06:22:00.507484    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt with IP's: []
	I1216 06:22:00.552695    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt ...
	I1216 06:22:00.552695    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt: {Name:mk4783bd7e1619c0ea341eaca75005ddd88d5b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.553960    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key ...
	I1216 06:22:00.553960    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key: {Name:mk427571c1896a50b896e76c58a633b5512ad44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.555335    4424 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8
	I1216 06:22:00.555661    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:22:00.581299    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 ...
	I1216 06:22:00.581299    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8: {Name:mk9cb22362f0ba7f5c0b5c6877c5c2e8d72eb278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.582304    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 ...
	I1216 06:22:00.582304    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8: {Name:mk2a3e21d232de7f748cffa074c96be0850cc9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.583303    4424 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt
	I1216 06:22:00.599920    4424 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key
	I1216 06:22:00.600703    4424 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key
	I1216 06:22:00.601353    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt with IP's: []
	I1216 06:22:00.664564    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt ...
	I1216 06:22:00.664564    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt: {Name:mk02eb62f20a18ad60f930ae30a248a87b7cb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.665010    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key ...
	I1216 06:22:00.665010    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key: {Name:mk8a8b2a6c6b1b3e2e2cc574e01303d6680bf793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.680006    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:22:00.680554    4424 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:22:00.680554    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:22:00.681404    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:22:00.683052    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:22:00.710388    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:22:00.737370    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:22:00.766290    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:22:00.790943    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:22:00.815072    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:22:00.839330    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:22:00.863340    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:22:00.921806    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:22:00.945068    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:22:00.972351    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:22:00.998813    4424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:22:01.025404    4424 ssh_runner.go:195] Run: openssl version
	I1216 06:22:01.039534    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.056142    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:22:01.077227    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.085140    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.089133    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	W1216 06:22:03.871305    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 06:22:03.871305    2100 node_ready.go:38] duration metric: took 6m0.0002926s for node "no-preload-686300" to be "Ready" ...
	I1216 06:22:03.874339    2100 out.go:203] 
	W1216 06:22:03.877050    2100 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:22:03.877050    2100 out.go:285] * 
	W1216 06:22:03.879403    2100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:22:03.881203    2100 out.go:203] 
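	[editor's note] The GUEST_START failure above is the node-Ready poll expiring: node_ready.go waited the full 6m0s for the "Ready" condition on node "no-preload-686300", and the final Get was cut short by the context deadline. The shape of that wait, sketched with client-go (assuming an already-built *kubernetes.Clientset; minikube's actual retry and backoff logic differs):

	    package nodewait

	    import (
	    	"context"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // waitNodeReady polls until the node reports Ready or the timeout expires.
	    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
	    		func(ctx context.Context) (bool, error) {
	    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // treat transient API errors as "not yet"
	    			}
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady {
	    					return c.Status == corev1.ConditionTrue, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    }
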
	W1216 06:22:00.861344    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:22:01.860562    7800 pod_ready.go:99] pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8s6v4" not found
	I1216 06:22:01.860562    7800 pod_ready.go:86] duration metric: took 10.0087717s for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:01.860562    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:03.875170    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:01.467725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:01.493737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:01.526377    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.526426    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:01.530527    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:01.563582    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.563582    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:01.567007    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:01.606460    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.606460    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:01.610155    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:01.647965    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.647965    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:01.652309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:01.691011    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.691011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:01.695466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:01.736509    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.736509    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:01.739991    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:01.773642    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.773642    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:01.777617    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:01.811025    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.811141    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:01.811141    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:01.811141    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:01.881533    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:01.881533    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.919632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:01.919632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:02.020491    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
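	[editor's note] The "connection refused" block above (repeated verbatim in the later gathers) is expected while the control plane is still coming up: nothing is listening on localhost:8443 yet, so every kubectl probe fails and logs.go simply retries the whole collection. A bare-bones readiness probe against the apiserver's livez endpoint looks like this sketch (InsecureSkipVerify only because the probe runs before the cluster CA is fetched):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 2 * time.Second,
	    		Transport: &http.Transport{
	    			// The apiserver serves its own self-signed cert; skip
	    			// verification for this liveness-only probe.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	for {
	    		resp, err := client.Get("https://localhost:8443/livez")
	    		if err == nil && resp.StatusCode == http.StatusOK {
	    			resp.Body.Close()
	    			fmt.Println("apiserver is up")
	    			return
	    		}
	    		if err == nil {
	    			resp.Body.Close()
	    		}
	    		time.Sleep(time.Second)
	    	}
	    }
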
	I1216 06:22:02.020538    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:02.020587    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:02.059933    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:02.060031    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:04.620893    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:04.645761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:04.680596    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.680596    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:04.684611    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:04.712607    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.712607    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:04.716737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:04.744218    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.744218    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:04.748501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:04.787600    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.787668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:04.791349    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:04.825500    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.825547    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:04.829098    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:04.878465    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.878465    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:04.881466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:04.910168    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.910168    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:04.914167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:04.949752    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.949810    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:04.949810    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:04.949873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.143585    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:22:01.161031    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:22:01.179456    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.197251    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:22:01.216028    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.226660    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.230697    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.278644    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:22:01.297647    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:22:01.317326    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.341360    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:22:01.367643    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.377139    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.383754    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.440843    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:22:01.457977    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
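	[editor's note] The test/ln sequence above installs each CA into the OpenSSL trust directory: /etc/ssl/certs/<subject-hash>.0 must be a symlink to the PEM, where the hash is what "openssl x509 -hash -noout" prints. A sketch of the same two steps from Go, shelling out to openssl exactly as the log does (paths are illustrative):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    // trustCert points /etc/ssl/certs/<hash>.0 at pemPath so OpenSSL
	    // consumers can find the CA by subject-hash lookup.
	    func trustCert(pemPath string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	hash := strings.TrimSpace(string(out))
	    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	    	os.Remove(link) // "ln -fs" semantics: replace any stale link
	    	return os.Symlink(pemPath, link)
	    }

	    func main() {
	    	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	    		panic(err)
	    	}
	    }
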
	I1216 06:22:01.476683    4424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:22:01.483599    4424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
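	[editor's note] The stat failure directly above is minikube's cheap "first start?" probe: if apiserver-kubelet-client.crt is absent, exit status 1 means kubeadm has never initialized this node, so a fresh "kubeadm init" follows rather than a restart path. Done locally, the same check is a single os.Stat (a sketch):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"io/fs"
	    	"os"
	    )

	    func main() {
	    	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	    	switch {
	    	case errors.Is(err, fs.ErrNotExist):
	    		fmt.Println("likely first start: cert not generated yet")
	    	case err != nil:
	    		panic(err)
	    	default:
	    		fmt.Println("cert exists: restart of an initialized node")
	    	}
	    }
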
	I1216 06:22:01.484303    4424 kubeadm.go:401] StartCluster: {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:22:01.490132    4424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:22:01.529050    4424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:22:01.545461    4424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:22:01.559986    4424 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:22:01.564509    4424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:22:01.575681    4424 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:22:01.575681    4424 kubeadm.go:158] found existing configuration files:
	
	I1216 06:22:01.581349    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:22:01.593595    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:22:01.599386    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:22:01.618969    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:22:01.633516    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:22:01.638266    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:22:01.656598    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.670398    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:22:01.674972    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.695466    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:22:01.709055    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:22:01.713665    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:22:01.733357    4424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:22:01.884136    4424 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:22:01.891445    4424 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:22:01.994223    4424 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
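	[editor's note] minikube launches "kubeadm init" with a long --ignore-preflight-errors list (visible in the Start line above the warnings) because the docker driver legitimately trips checks such as Swap and SystemVerification; the three [WARNING] lines are those checks downgraded rather than fatal. A sketch of launching it from Go (plain os/exec; the flag list is abbreviated from the log, and the kubeadm binary is assumed to be on PATH rather than under /var/lib/minikube/binaries as minikube arranges):

	    package main

	    import (
	    	"context"
	    	"os"
	    	"os/exec"
	    	"time"
	    )

	    func main() {
	    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	    	defer cancel()
	    	cmd := exec.CommandContext(ctx, "sudo", "kubeadm", "init",
	    		"--config", "/var/tmp/minikube/kubeadm.yaml",
	    		"--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification")
	    	// Stream kubeadm's progress lines, as the kubeadm.go:319 entries show.
	    	cmd.Stdout = os.Stdout
	    	cmd.Stderr = os.Stderr
	    	if err := cmd.Run(); err != nil {
	    		panic(err)
	    	}
	    }
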
	W1216 06:22:06.379758    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:08.874715    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:04.987656    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:04.987703    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:05.093013    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:05.093013    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:05.093013    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:05.148503    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:05.148503    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:05.222357    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:05.222357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:07.791130    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:07.816699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:07.846890    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.846890    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:07.850551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:07.885179    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.885179    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:07.889622    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:07.920925    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.920925    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:07.925517    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:07.955043    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.955043    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:07.959825    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:07.988928    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.988928    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:07.993735    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:08.025335    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.025335    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:08.031801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:08.063231    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.063231    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:08.068525    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:08.106217    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.106217    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:08.106217    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:08.106217    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:08.173411    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:08.173411    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:08.241764    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:08.241764    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:08.282741    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:08.282741    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:08.376141    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:08.376181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:08.376246    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:10.875960    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:13.371029    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:13.873624    7800 pod_ready.go:94] pod "coredns-66bc5c9577-tcbrk" is "Ready"
	I1216 06:22:13.873624    7800 pod_ready.go:86] duration metric: took 12.0128951s for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.879094    7800 pod_ready.go:83] waiting for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.889705    7800 pod_ready.go:94] pod "etcd-bridge-030800" is "Ready"
	I1216 06:22:13.889705    7800 pod_ready.go:86] duration metric: took 10.6111ms for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.893578    7800 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.912416    7800 pod_ready.go:94] pod "kube-apiserver-bridge-030800" is "Ready"
	I1216 06:22:13.912416    7800 pod_ready.go:86] duration metric: took 18.8376ms for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.917120    7800 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.068093    7800 pod_ready.go:94] pod "kube-controller-manager-bridge-030800" is "Ready"
	I1216 06:22:14.068093    7800 pod_ready.go:86] duration metric: took 150.9707ms for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.266154    7800 pod_ready.go:83] waiting for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.666596    7800 pod_ready.go:94] pod "kube-proxy-pbfkb" is "Ready"
	I1216 06:22:14.666596    7800 pod_ready.go:86] duration metric: took 400.436ms for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:10.906574    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:10.929977    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:10.963006    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.963006    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:10.966334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:10.995517    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.995517    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:10.998887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:11.027737    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.027771    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:11.034529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:11.070221    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.070221    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:11.075447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:11.105575    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.105575    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:11.108569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:11.143549    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.143549    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:11.146562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:11.178034    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.178034    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:11.181411    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:11.211522    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.211522    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:11.211522    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:11.211522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:11.244289    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:11.244289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:11.295870    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:11.295870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:11.359418    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:11.360418    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:11.394416    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:11.394416    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:11.489247    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:13.994214    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:14.016691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:14.049641    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.049641    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:14.053607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:14.088893    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.088893    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:14.092847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:14.131857    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.131857    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:14.135845    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:14.168503    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.168503    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:14.172477    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:14.200948    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.200948    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:14.204642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:14.234975    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.234975    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:14.238802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:14.274052    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.274107    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:14.277642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:14.306199    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.306199    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:14.306199    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:14.306199    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:14.374972    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:14.374972    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:14.411356    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:14.411356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:14.498252    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:14.498283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:14.498283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:14.528112    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:14.528112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:14.872200    7800 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:94] pod "kube-scheduler-bridge-030800" is "Ready"
	I1216 06:22:15.267078    7800 pod_ready.go:86] duration metric: took 394.8723ms for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:40] duration metric: took 23.4251556s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:15.362849    7800 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:15.367720    7800 out.go:179] * Done! kubectl is now configured to use "bridge-030800" cluster and "default" namespace by default
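	[editor's note] The pod_ready.go contract in the stream above is "Ready or be gone": a pod that disappears mid-wait, like coredns-66bc5c9577-8s6v4 being replaced during rollout, counts as success rather than failure. A sketch of that per-pod check with client-go (assuming a built clientset, as in the node-Ready sketch earlier):

	    package podwait

	    import (
	    	"context"

	    	corev1 "k8s.io/api/core/v1"
	    	apierrors "k8s.io/apimachinery/pkg/api/errors"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // readyOrGone reports true once the pod is Ready, or has been deleted.
	    func readyOrGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	    	if apierrors.IsNotFound(err) {
	    		return true, nil // a replaced or deleted pod counts as done
	    	}
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue, nil
	    		}
	    	}
	    	return false, nil
	    }
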
	I1216 06:22:17.092050    4424 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:22:17.093065    4424 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:22:17.093065    4424 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:22:17.096059    4424 out.go:252]   - Generating certificates and keys ...
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:22:17.099055    4424 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:22:17.099055    4424 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:22:17.102055    4424 out.go:252]   - Booting up control plane ...
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:22:17.104058    4424 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.507351804s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.957344338s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.90080548s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002224001s
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:22:17.106067    4424 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:22:17.107057    4424 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:22:17.107057    4424 kubeadm.go:319] [bootstrap-token] Using token: rs8etp.b2dh1vgtia9jcvb4
	I1216 06:22:17.081041    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:17.103056    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:17.137059    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.137059    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:17.141064    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:17.172640    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.172640    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:17.176638    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:17.210910    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.210910    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:17.215347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:17.248986    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.248986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:17.252989    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:17.287415    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.287415    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:17.293572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:17.324098    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.324098    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:17.330062    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:17.366512    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.366512    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:17.370101    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:17.402400    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.402400    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:17.402400    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:17.402400    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:17.455027    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:17.455027    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:17.513029    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:17.513029    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:17.548022    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:17.548022    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:17.645629    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
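	The repeated "connection refused" errors above come from minikube's log gatherer probing an apiserver that has no running container yet. The same two probes it performs can be issued by hand on the node (commands lifted from the log above):
	
	    # does an apiserver container exist at all?
	    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	
	    # talk to the apiserver with the node-local kubeconfig
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig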
	I1216 06:22:17.645629    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:17.645629    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:17.110053    4424 out.go:252]   - Configuring RBAC rules ...
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:22:17.111060    4424 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.111060    4424 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:22:17.113053    4424 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:22:17.113053    4424 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:22:17.113053    4424 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--control-plane 
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
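	The join commands echoed above embed the bootstrap token rs8etp.b2dh1vgtia9jcvb4, which kubeadm creates with a 24h TTL by default. If it has expired by the time another node joins, a fresh command can be minted on the control plane (a standard kubeadm invocation, not part of this run):
	
	    # regenerate a token and print the matching join command
	    kubeadm token create --print-join-command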
	I1216 06:22:17.114052    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:17.114052    4424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-030800 minikube.k8s.io/updated_at=2025_12_16T06_22_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kubenet-030800 minikube.k8s.io/primary=true
	I1216 06:22:17.134054    4424 ops.go:34] apiserver oom_adj: -16
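	The label command two lines up stamps minikube metadata (version, commit, primary flag) onto the node object; the result is visible afterwards with a standard kubectl query (a sketch, node name from this run):
	
	    kubectl get node kubenet-030800 --show-labels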
	I1216 06:22:17.253989    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.753536    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.254825    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.755186    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.255440    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.754492    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.256463    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.753254    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.253896    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.753097    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.858877    4424 kubeadm.go:1114] duration metric: took 4.7437541s to wait for elevateKubeSystemPrivileges
	I1216 06:22:21.858877    4424 kubeadm.go:403] duration metric: took 20.3742909s to StartCluster
	I1216 06:22:21.858877    4424 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.858877    4424 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:22:21.861003    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.861972    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:22:21.861972    4424 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:22:21.861972    4424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:22:21.861972    4424 addons.go:70] Setting storage-provisioner=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:239] Setting addon storage-provisioner=true in "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:70] Setting default-storageclass=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:22:21.861972    4424 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-030800"
	I1216 06:22:21.861972    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.864167    4424 out.go:179] * Verifying Kubernetes components...
	I1216 06:22:21.875224    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:21.939068    4424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:22:21.939740    4424 addons.go:239] Setting addon default-storageclass=true in "kubenet-030800"
	I1216 06:22:21.939740    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.942493    4424 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:21.942493    4424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:22:21.947611    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:21.951961    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:22.001257    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.003241    4424 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.003241    4424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:22:22.006248    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:22.070295    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.425928    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:22:22.444230    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:22.451290    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.540661    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:24.151685    4424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7257338s)
	I1216 06:22:24.151837    4424 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
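	The sed pipeline above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to 192.168.65.254 from inside pods. The injected hosts block can be inspected afterwards (a sketch, using the node-local kubeconfig the same way the log does):
	
	    sudo /var/lib/minikube/binaries/v1.34.2/kubectl -n kube-system get configmap coredns -o yaml --kubeconfig=/var/lib/minikube/kubeconfig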
	I1216 06:22:24.529871    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.0785053s)
	I1216 06:22:24.529983    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.0856125s)
	I1216 06:22:24.530029    4424 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9893406s)
	I1216 06:22:24.535621    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:24.547997    4424 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
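	A finished addon enable like this one can be cross-checked from the host with the minikube CLI (profile name from this run):
	
	    minikube addons list -p kubenet-030800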
	I1216 06:22:20.178315    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:20.202308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:20.231344    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.231344    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:20.236317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:20.279459    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.279459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:20.283465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:20.322463    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.322463    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:20.327465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:20.366466    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.366466    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:20.371478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:20.409468    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.409468    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:20.413471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:20.447432    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.447432    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:20.451099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:20.486103    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.486103    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:20.490094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:20.530098    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.530098    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:20.530098    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:20.530098    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:20.557089    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:20.557089    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:20.606234    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:20.607239    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:20.667498    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:20.667498    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:20.703674    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:20.703674    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:20.796605    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:23.300916    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:23.324266    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:23.355598    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.355598    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:23.359141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:23.390554    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.390644    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:23.394340    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:23.423019    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.423019    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:23.426772    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:23.456953    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.457021    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:23.460762    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:23.491477    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.491477    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:23.495183    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:23.527107    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.527107    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:23.531577    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:23.559306    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.559306    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:23.563381    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:23.592615    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.592615    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:23.592615    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:23.592615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:23.630103    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:23.630103    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:23.719384    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:23.719514    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:23.719546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:23.746097    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:23.746097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:23.807727    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:23.807727    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:24.550004    4424 addons.go:530] duration metric: took 2.6879945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:22:24.591996    4424 node_ready.go:35] waiting up to 15m0s for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 node_ready.go:49] node "kubenet-030800" is "Ready"
	I1216 06:22:24.646202    4424 node_ready.go:38] duration metric: took 54.2051ms for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:22:24.652200    4424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:24.721472    4424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-030800" context rescaled to 1 replicas
	I1216 06:22:24.735392    4424 api_server.go:72] duration metric: took 2.87338s to wait for apiserver process to appear ...
	I1216 06:22:24.735392    4424 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:22:24.735392    4424 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56385/healthz ...
	I1216 06:22:24.821241    4424 api_server.go:279] https://127.0.0.1:56385/healthz returned 200:
	ok
	I1216 06:22:24.825583    4424 api_server.go:141] control plane version: v1.34.2
	I1216 06:22:24.825583    4424 api_server.go:131] duration metric: took 90.1899ms to wait for apiserver health ...
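	The healthz probe above goes through the Docker-published host port 56385 rather than 8443 directly. The same check from the Windows host is a one-liner (certificate verification disabled, since the apiserver cert is not in the host trust store):
	
	    curl -sk https://127.0.0.1:56385/healthz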
	I1216 06:22:24.825583    4424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:22:24.832936    4424 system_pods.go:59] 8 kube-system pods found
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.833022    4424 system_pods.go:61] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.833131    4424 system_pods.go:74] duration metric: took 7.4392ms to wait for pod list to return data ...
	I1216 06:22:24.833131    4424 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:22:24.838156    4424 default_sa.go:45] found service account: "default"
	I1216 06:22:24.838156    4424 default_sa.go:55] duration metric: took 5.0253ms for default service account to be created ...
	I1216 06:22:24.838156    4424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:22:24.844228    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.844228    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.844228    4424 retry.go:31] will retry after 236.325715ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.105694    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.105749    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.105770    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.105848    4424 retry.go:31] will retry after 372.640753ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.532382    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.532482    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.532587    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.532611    4424 retry.go:31] will retry after 313.138834ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.853141    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.853661    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.853715    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.853777    4424 retry.go:31] will retry after 472.942865ms: missing components: kube-dns, kube-proxy
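	The retry loop above keeps polling until the kube-dns (CoreDNS) and kube-proxy pods leave Pending. The manual equivalent, showing which kube-system pods are still unready (a sketch against the kubenet-030800 context):
	
	    kubectl get pods -n kube-system -o wide --context kubenet-030800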
	I1216 06:22:26.382913    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:26.404112    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:26.436722    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.436722    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:26.440749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:26.470877    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.470877    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:26.474941    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:26.503887    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.503950    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:26.508216    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:26.538317    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.538317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:26.542754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:26.571126    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.571189    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:26.574883    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:26.604762    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.604762    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:26.608705    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:26.637404    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.637444    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:26.641214    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:26.669720    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.669720    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:26.669720    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:26.669720    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:26.707289    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:26.707289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:26.791357    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:22:26.791357    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:26.791357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:26.817227    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:26.817227    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.865832    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:26.865832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.436231    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:29.459817    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:29.493134    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.493186    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:29.497118    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:29.526722    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.526722    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:29.531481    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:29.561672    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.561718    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:29.566882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:29.595896    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.595947    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:29.599655    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:29.628575    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.628661    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:29.632644    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:29.660164    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.660164    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:29.663829    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:29.694413    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.694413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:29.698152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:29.725286    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.725286    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
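The block above is minikube's container census: one `docker ps -a` per control-plane component, keyed on the `k8s_` name prefix that cri-dockerd gives pod containers. All eight lookups returning `0 containers` means the kubelet never created (or recreated) the static pods, which is consistent with the connection-refused errors. A condensed, hypothetical equivalent of that loop:

    # One docker ps per component, mirroring the filters logged above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "${c}: ${ids:-<none>}"
    done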
	I1216 06:22:29.725355    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:29.725355    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.787721    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:29.787721    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:29.828376    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:29.828376    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:29.916249    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:29.916249    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:29.916249    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:29.942276    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:29.942276    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
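That completes one full evidence-gathering round, which the failing job now repeats every few seconds: kubelet journal, dmesg, `describe nodes` (failing), the Docker/cri-docker journal, and a crictl-or-docker container listing. The same probes can be re-run by hand when triaging a stuck profile; a sketch, assuming a shell on the node:

    sudo journalctl -u kubelet -n 400                     # kubelet restarts and errors
    sudo journalctl -u docker -u cri-docker -n 400        # runtime and CRI shim
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400    # kernel-side problems
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a   # container census fallback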
	I1216 06:22:26.336069    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Running
	I1216 06:22:26.336069    4424 system_pods.go:126] duration metric: took 1.4978916s to wait for k8s-apps to be running ...
	I1216 06:22:26.336069    4424 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:22:26.342244    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:22:26.368294    4424 system_svc.go:56] duration metric: took 32.1861ms WaitForService to wait for kubelet
	I1216 06:22:26.368345    4424 kubeadm.go:587] duration metric: took 4.5062595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:22:26.368345    4424 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:22:26.376647    4424 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:22:26.376691    4424 node_conditions.go:123] node cpu capacity is 16
	I1216 06:22:26.376745    4424 node_conditions.go:105] duration metric: took 8.3456ms to run NodePressure ...
	I1216 06:22:26.376745    4424 start.go:242] waiting for startup goroutines ...
	I1216 06:22:26.376745    4424 start.go:247] waiting for cluster config update ...
	I1216 06:22:26.376795    4424 start.go:256] writing updated cluster config ...
	I1216 06:22:26.382913    4424 ssh_runner.go:195] Run: rm -f paused
	I1216 06:22:26.391122    4424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:26.399112    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:28.410987    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:30.912607    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
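Interleaved with the failing profile, the second start job (pid 4424, profile kubenet-030800) is healthy: all eight kube-system pods exist, the kubelet service is active, the NodePressure checks pass, and it has moved on to the per-pod Ready wait. That wait amounts to polling each pod's Ready condition; a hand-rolled sketch with the pod name taken from the log (everything else is illustrative):

    # Poll one coredns pod's Ready condition until it reports True.
    while true; do
      st=$(kubectl -n kube-system get pod coredns-66bc5c9577-8qrgg \
            -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
      [ "$st" = "True" ] && break
      sleep 2
    done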
	I1216 06:22:32.497361    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:32.517362    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:32.549841    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.549912    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:32.553592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:32.582070    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.582070    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:32.585068    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:32.612095    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.612095    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:32.615889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:32.644953    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.644953    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:32.649025    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:32.676348    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.676429    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:32.680134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:32.708040    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.708040    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:32.712034    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:32.745789    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.745789    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:32.752533    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:32.781449    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.781504    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:32.781504    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:32.781504    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:32.843135    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:32.843135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:32.881564    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:32.881564    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:32.982597    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:32.982597    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:32.982597    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:33.013212    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:33.013212    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:22:33.410898    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:35.912070    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	I1216 06:22:35.578218    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:35.601163    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:35.629786    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.629786    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:35.634440    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:35.663168    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.663168    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:35.667699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:35.699050    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.699050    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:35.703224    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:35.736149    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.736149    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:35.741542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:35.772450    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.772450    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:35.776692    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:35.804150    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.804150    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:35.808799    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:35.837871    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.837871    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:35.841100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:35.870769    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.870769    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:35.870769    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:35.870769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:35.934803    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:35.934803    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:35.973201    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:35.973201    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:36.070057    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:36.070057    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:36.070057    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:36.098690    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:36.098690    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:38.663786    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:38.688639    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:38.718646    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.718646    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:38.721640    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:38.751651    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.751651    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:38.754647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:38.784327    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.784327    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:38.788327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:38.815337    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.815337    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:38.818328    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:38.846331    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.846331    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:38.849339    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:38.880297    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.880297    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:38.884227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:38.917702    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.917702    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:38.920940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:38.964973    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.964973    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:38.964973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:38.964973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:38.999971    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:38.999971    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:39.102927    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:39.102927    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:39.102927    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:39.141934    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:39.141934    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:39.210081    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:39.210081    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:36.404625    4424 pod_ready.go:99] pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8qrgg" not found
	I1216 06:22:36.404625    4424 pod_ready.go:86] duration metric: took 10.0053735s for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:36.404625    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:38.415310    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:40.417680    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
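Note the waiter's "Ready or be gone" semantics just above: coredns-66bc5c9577-8qrgg disappears mid-wait (most likely the coredns Deployment being scaled down to a single replica) and pod_ready treats "not found" as success rather than a timeout, then moves on to the surviving replica. Distinguishing "gone" from "present but unready" only needs the --ignore-not-found flag; a sketch:

    # Exit 0 with empty output => the pod no longer exists (the "gone" branch).
    kubectl -n kube-system get pod coredns-66bc5c9577-8qrgg --ignore-not-found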
	I1216 06:22:41.775031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:41.798710    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:41.831778    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.831778    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:41.835461    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:41.866411    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.866411    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:41.871544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:41.902486    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.902486    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:41.905907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:41.932887    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.932887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:41.935886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:41.965890    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.965890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:41.968887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:42.000893    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.000893    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:42.004876    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:42.043522    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.043591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:42.049149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:42.081678    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.081678    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:42.081678    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:42.081678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:42.140208    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:42.140208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:42.198197    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:42.198197    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:42.241586    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:42.241586    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:42.350617    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:42.350617    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:42.350617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:44.884303    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:44.902304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:44.933421    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.933421    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:44.938149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:44.974292    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.974334    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:44.977512    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1216 06:22:42.418518    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:44.914304    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:45.010620    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.010620    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:45.013618    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:45.047628    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.047628    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:45.050627    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:45.089756    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.089850    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:45.096356    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:45.137323    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.137323    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:45.141322    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:45.169330    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.170335    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:45.173321    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:45.202336    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.202336    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:45.202336    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:45.202336    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:45.227331    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:45.227331    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:45.275577    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:45.275630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:45.335206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:45.335206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:45.372222    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:45.372222    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:45.471935    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:47.976320    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:48.004505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:48.037430    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.037430    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:48.040437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:48.076428    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.076477    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:48.081194    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:48.118536    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.118536    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:48.124810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:48.153702    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.153702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:48.159558    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:48.187736    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.187736    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:48.192607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:48.225619    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.225619    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:48.229580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:48.260085    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.260085    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:48.263087    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:48.294313    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.294376    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:48.294376    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:48.294425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:48.345094    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:48.345094    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:48.423576    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:48.423576    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:48.459577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:48.459577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:48.548441    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:48.548441    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:48.548441    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:47.414818    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:49.417236    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:51.080561    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:51.104134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:51.132144    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.132144    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:51.136151    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:51.163962    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.163962    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:51.169361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:51.198404    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.198404    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:51.201253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:51.229899    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.229899    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:51.232895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:51.261881    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.261881    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:51.264887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:51.295306    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.295306    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:51.298763    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:51.331779    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.331850    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:51.337211    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:51.367502    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.367502    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:51.367502    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:51.367502    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:51.424226    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:51.424226    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:51.482475    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:51.482475    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:51.527426    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:51.527426    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:51.618444    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:51.618444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:51.618444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.148108    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:54.167190    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:54.198456    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.198456    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:54.202605    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:54.236901    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.236901    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:54.240906    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:54.272541    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.272541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:54.277008    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:54.312764    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.312764    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:54.317359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:54.347564    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.347564    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:54.350557    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:54.377557    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.377557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:54.381564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:54.411585    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.411585    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:54.415565    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:54.447567    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.447567    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:54.447567    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:54.447567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:54.483559    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:54.483559    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:54.589583    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
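	Every "describe nodes" attempt in this stream fails the same way: no kube-apiserver container exists, so nothing is listening on the node's port 8443, and each kubectl invocation gets connection refused on the [::1]:8443 dial before it can even fetch the API group list. The failure is reproducible with a bare TCP dial; a minimal Go sketch (the endpoint is taken from the log, everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is trying: the apiserver's secure port on the node.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver process, this prints "connect: connection refused",
		// matching the stderr captured above.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}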
	I1216 06:22:54.589583    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:54.589583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.617283    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:54.617349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:54.673906    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:54.673990    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
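	The "container status" step above relies on a shell fallback: `which crictl || echo crictl` resolves to crictl when it is installed (and to the bare word crictl, which then fails, when it is not), and the trailing `|| sudo docker ps -a` retries with Docker on any failure. A rough Go rendering of the same prefer-crictl-then-docker logic (a sketch, not minikube's code; the helper name runPS is made up):

package main

import (
	"fmt"
	"os/exec"
)

// runPS mirrors the fallback in the log: prefer crictl if it is on PATH,
// and fall back to `docker ps -a` when crictl is missing or errors out.
func runPS() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("docker", "ps", "-a").Output()
}

func main() {
	out, err := runPS()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}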
	W1216 06:22:51.420194    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:53.916809    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:55.919718    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:58.419688    4424 pod_ready.go:94] pod "coredns-66bc5c9577-w7zmc" is "Ready"
	I1216 06:22:58.419688    4424 pod_ready.go:86] duration metric: took 22.0147573s for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.424677    4424 pod_ready.go:83] waiting for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.432677    4424 pod_ready.go:94] pod "etcd-kubenet-030800" is "Ready"
	I1216 06:22:58.432677    4424 pod_ready.go:86] duration metric: took 7.9992ms for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.435689    4424 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.459477    4424 pod_ready.go:94] pod "kube-apiserver-kubenet-030800" is "Ready"
	I1216 06:22:58.459477    4424 pod_ready.go:86] duration metric: took 22.793ms for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.463834    4424 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.611617    4424 pod_ready.go:94] pod "kube-controller-manager-kubenet-030800" is "Ready"
	I1216 06:22:58.611617    4424 pod_ready.go:86] duration metric: took 147.7381ms for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.811398    4424 pod_ready.go:83] waiting for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.211755    4424 pod_ready.go:94] pod "kube-proxy-5b9l9" is "Ready"
	I1216 06:22:59.211755    4424 pod_ready.go:86] duration metric: took 400.3513ms for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.412761    4424 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811735    4424 pod_ready.go:94] pod "kube-scheduler-kubenet-030800" is "Ready"
	I1216 06:22:59.811813    4424 pod_ready.go:86] duration metric: took 399.0464ms for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811850    4424 pod_ready.go:40] duration metric: took 33.4202632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:59.926671    4424 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:59.930035    4424 out.go:179] * Done! kubectl is now configured to use "kubenet-030800" cluster and "default" namespace by default
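	The lines tagged with PID 4424 belong to a parallel test (the kubenet-030800 cluster) sharing this log: it polls each kube-system control-plane pod until the Ready condition holds, with coredns taking about 22s and the rest already Ready, then declares the cluster usable. Roughly the same wait can be expressed with kubectl wait; a sketch that approximates minikube's internal pod_ready loop, not its actual implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Label selectors matching the wait in the log, one per control-plane piece.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		// kubectl wait blocks until the condition holds or the timeout expires.
		cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=120s")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("%s: not ready: %v\n%s", sel, err, out)
			return
		}
		fmt.Printf("%s: Ready\n", sel)
	}
}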
	I1216 06:22:57.250472    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:57.271468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:57.303800    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.303800    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:57.306801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:57.338803    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.338803    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:57.341800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:57.369018    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.369018    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:57.372806    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:57.403510    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.403510    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:57.406808    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:57.440995    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.440995    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:57.444225    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:57.475612    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.475612    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:57.479607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:57.509842    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.509842    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:57.513186    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:57.545981    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.545981    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
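	This is one complete pass of the container probe that repeats throughout this section: `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` is run once per expected component (kubelet names its containers k8s_<component>_...), and every filter returns an empty ID list, meaning the control-plane containers were never created on this node. A standalone sketch of the same probe, assuming only a local docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components minikube checks for, in the order seen in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Match kubelet-created containers, which are named k8s_<component>_...
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}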
	I1216 06:22:57.545981    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:57.545981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:57.636635    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:57.636635    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:57.636635    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:57.662639    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:57.662639    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:57.720464    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:57.720464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:57.782460    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:57.782460    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.324364    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:00.344368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:00.375358    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.375358    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:00.378355    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:00.410368    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.410368    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:00.414359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:00.442364    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.442364    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:00.446359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:00.476371    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.476371    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:00.479359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:00.508323    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.508323    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:00.512431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:00.550611    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.550611    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:00.553606    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:00.586336    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.586336    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:00.590552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:00.624129    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.624129    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:00.624129    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:00.624129    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:00.685547    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:00.685547    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.737417    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:00.737417    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:00.858025    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:00.858025    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:00.858025    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:00.886607    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:00.886607    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:03.463847    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:03.826614    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:03.881622    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.881622    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:03.887610    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:03.936557    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.937539    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:03.941562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:03.979542    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.979542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:03.983550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:04.020535    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.020535    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:04.025547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:04.064541    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.064541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:04.068548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:04.101538    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.101538    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:04.104544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:04.141752    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.141752    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:04.146757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:04.182755    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.182755    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:04.182755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:04.182755    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:04.305758    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:04.305758    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:04.356425    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:04.356425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:04.487429    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:04.487429    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:04.487429    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:04.526318    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:04.526362    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.087022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:07.110346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:07.137790    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.137790    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:07.141786    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:07.174601    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.174601    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:07.179419    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:07.211656    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.211656    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:07.216897    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:07.250459    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.250459    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:07.254048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:07.282207    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.282207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:07.285851    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:07.313925    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.313925    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:07.317529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:07.348851    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.348851    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:07.353083    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:07.381401    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.381401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:07.381401    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:07.381401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:07.408641    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:07.408641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.450935    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:07.450935    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:07.512733    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:07.512733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:07.552522    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:07.552522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:07.649624    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
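	Taken together, the cycles above form a poll loop: probe for a kube-apiserver process (pgrep), list the expected containers, gather logs, and retry, with the timestamps advancing roughly three seconds per pass (06:22:54, :57, 06:23:00, :03, :07, ...). A minimal sketch of such a deadline-bounded poll (local pgrep in place of minikube's SSH runner; the interval and pattern are read off the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// pgrep -x (exact), -n (newest), -f (full command line), as in the log.
	// pgrep exits non-zero when no process matches, so Run() returns an error.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}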
	I1216 06:23:10.155054    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:10.178201    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:10.207068    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.207068    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:10.210473    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:10.239652    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.239652    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:10.242766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:10.274887    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.274887    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:10.278519    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:10.308294    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.308351    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:10.312209    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:10.342572    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.342572    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:10.346437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:10.375569    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.375630    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:10.378861    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:10.405446    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.405446    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:10.410730    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:10.441244    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.441244    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:10.441244    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:10.441244    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:10.502753    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:10.502753    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:10.540437    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:10.540437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:10.626853    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.626853    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:10.626853    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:10.654987    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:10.655058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.213336    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:13.237358    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:13.266636    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.266721    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:13.270023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:13.297369    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.297434    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:13.300782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:13.336039    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.336039    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:13.341919    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:13.370523    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.370523    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:13.374455    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:13.404606    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.404606    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:13.408542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:13.437373    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.437431    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:13.441106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:13.470738    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.470738    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:13.474495    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:13.502203    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.502262    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:13.502262    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:13.502293    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.552578    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:13.552578    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:13.617499    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:13.617499    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:13.660047    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:13.660047    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:13.747316    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:13.747316    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:13.747316    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.284216    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:16.307907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:16.344535    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.344535    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:16.347847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:16.379001    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.379021    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:16.382292    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:16.413093    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.413116    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:16.418012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:16.456763    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.456826    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:16.460621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:16.491671    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.491693    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:16.495352    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:16.527862    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.527862    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:16.534704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:16.564194    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.564243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:16.570369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:16.601444    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.601444    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:16.601444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:16.601444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.631785    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:16.631785    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:16.675190    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:16.675190    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:16.737700    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:16.737700    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:16.775092    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:16.775092    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:16.865026    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:19.370669    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:19.393524    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:19.423405    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.423513    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:19.427307    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:19.459137    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.459238    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:19.462635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:19.493542    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.493542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:19.497334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:19.526496    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.526496    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:19.529949    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:19.559120    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.559120    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:19.562460    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:19.591305    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.591305    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:19.595794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:19.625200    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.626193    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:19.629187    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:19.657201    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.657201    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:19.657270    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:19.657270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:19.722496    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:19.722496    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:19.761161    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:19.761161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:19.852755    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:19.853756    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:19.853756    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:19.880330    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:19.881280    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.458668    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:22.483505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:22.514647    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.514647    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:22.518193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:22.551494    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.551494    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:22.555268    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:22.586119    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.586119    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:22.590107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:22.621733    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.621733    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:22.624739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:22.651728    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.651728    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:22.655725    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:22.687826    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.687826    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:22.692217    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:22.727413    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.727413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:22.731318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:22.769477    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.769477    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:22.770462    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:22.770462    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:22.795455    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:22.795455    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.851473    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:22.851473    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:22.911454    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:22.912459    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:22.948112    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:22.948112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:23.039238    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
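The eight docker ps -a --filter=name=k8s_<component> --format={{.ID}} probes repeated above are how minikube's logs.go decides which control-plane containers exist before gathering their logs; here every probe returns "0 containers", so only host-level logs get collected. A minimal standalone sketch of the same check, in Go, assuming only a docker binary on PATH (the helper name is ours, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the probe above: list all containers (running or
    // not) whose name carries the k8s_<component> prefix, printing only IDs.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "probe failed:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

An empty result for k8s_kube-apiserver is exactly the condition that keeps this wait loop cycling.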
	I1216 06:23:25.544174    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:25.571784    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:25.610368    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.610422    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:25.615377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:25.651080    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.651129    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:25.655234    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:25.695942    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.695942    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:25.700548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:25.727743    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.727743    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:25.730739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:25.765620    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.765650    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:25.769261    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:25.805072    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.805127    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:25.810318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:25.840307    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.840307    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:25.844490    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:25.888279    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.888279    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:25.888279    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:25.888279    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:25.964206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:25.964206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:26.003275    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:26.003275    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:26.111485    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:26.111485    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:26.111485    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:26.146819    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:26.146819    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
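The "Gathering logs" steps each run a fixed shell command over SSH. The container-status one is worth unpacking: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is installed and falls back to plain docker ps -a otherwise. A sketch that runs the same gather set locally and labels each block (assumes bash and sudo are available, as they are inside the minikube container):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same commands the log shows minikube running via ssh_runner.
        gathers := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            fmt.Printf("==> %s\n", g.name)
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            if err != nil {
                fmt.Println("gather failed:", err)
            }
            fmt.Print(string(out))
        }
    }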
	I1216 06:23:28.694382    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:28.716947    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:28.753062    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.753062    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:28.756810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:28.789692    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.789692    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:28.794681    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:28.823690    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.823690    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:28.827683    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:28.858686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.858686    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:28.861688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:28.891686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.891686    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:28.894684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:28.923683    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.923683    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:28.926684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:28.958314    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.958314    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:28.962325    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:28.991317    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.991317    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:28.991317    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:28.991317    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:29.039348    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:29.039348    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:29.103117    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:29.103117    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:29.148003    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:29.148003    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:29.240448    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
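Every kubectl describe nodes attempt above dies the same way: dial tcp [::1]:8443: connect: connection refused means nothing is listening on the apiserver port at all, as opposed to a TLS or authorization failure, which would only surface after the dial succeeds. A quick port probe makes that distinction explicit (sketch; the address is the one from the kubeconfig used in this run):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // "connection refused" here means no listener, matching the log.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }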
	I1216 06:23:29.240448    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:29.240448    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:31.772923    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:31.796203    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:31.827485    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.827485    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:31.830572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:31.873718    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.873718    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:31.877445    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:31.926391    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.926391    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:31.929391    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:31.964572    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.964572    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:31.968096    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:32.003776    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.003776    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:32.007175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:32.046322    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.046322    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:32.049283    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:32.077299    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.077299    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:32.080289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:32.114717    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.114793    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:32.114793    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:32.114843    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:32.191987    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:32.191987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:32.237143    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:32.237143    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:32.331899    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:32.331899    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:32.331899    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:32.362021    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:32.362021    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:34.918825    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:34.945647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:34.976745    8452 logs.go:282] 0 containers: []
	W1216 06:23:34.976745    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:34.980636    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:35.012295    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.012295    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:35.015295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:35.047289    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.047289    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:35.050289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:35.081492    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.081492    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:35.085580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:35.121645    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.121645    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:35.126840    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:35.167976    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.167976    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:35.170966    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:35.201969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.201969    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:35.204969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:35.232969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.233980    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:35.233980    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:35.233980    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:35.292973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:35.292973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:35.327973    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:35.327973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:35.420114    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:35.420114    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:35.420114    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:35.451148    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:35.451148    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:38.010056    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:38.035506    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:38.071853    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.071853    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:38.075564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:38.106543    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.106543    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:38.109547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:38.143669    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.143669    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:38.152737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:38.191923    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.191923    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:38.195575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:38.225935    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.225935    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:38.228939    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:38.268550    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.268550    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:38.271759    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:38.304387    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.304421    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:38.307849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:38.341968    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.341968    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:38.341968    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:38.341968    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:38.404267    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:38.404267    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:38.443104    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:38.443104    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:38.551474    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:38.551474    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:38.551474    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:38.582843    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:38.582869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.141896    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:41.185331    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:41.218961    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.219548    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:41.223789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:41.252376    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.252376    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:41.255368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:41.285378    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.285378    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:41.288369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:41.318383    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.318383    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:41.321372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:41.349373    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.349373    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:41.353377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:41.390105    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.390105    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:41.393103    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:41.425109    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.425109    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:41.428107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:41.462594    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.462594    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:41.462594    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:41.462594    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:41.492096    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:41.492156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.553755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:41.553806    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:41.622329    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:41.622329    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:41.664016    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:41.664016    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:41.759009    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:44.265223    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:44.286309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:44.319583    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.319583    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:44.324575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:44.358046    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.358114    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:44.361895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:44.390541    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.390541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:44.395354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:44.433163    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.433163    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:44.436754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:44.470605    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.470605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:44.475856    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:44.504412    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.504484    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:44.508013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:44.540170    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.540170    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:44.545802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:44.574593    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.575118    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:44.575181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:44.575181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:44.609181    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:44.609231    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:44.663988    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:44.663988    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:44.737678    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:44.737678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:44.777530    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:44.777530    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:44.868751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
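Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest matching process. The equivalent scan over /proc, as a Linux-only sketch (the function name is ours; "newest" is approximated here by highest PID):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "path/filepath"
        "regexp"
        "strconv"
        "strings"
    )

    // newestMatchingPID mimics `pgrep -xnf <pattern>` by scanning /proc.
    func newestMatchingPID(pattern string) (int, error) {
        re, err := regexp.Compile("^(?:" + pattern + ")$") // -x: whole-line match
        if err != nil {
            return 0, err
        }
        entries, err := os.ReadDir("/proc")
        if err != nil {
            return 0, err
        }
        newest := -1
        for _, e := range entries {
            pid, err := strconv.Atoi(e.Name())
            if err != nil {
                continue // not a process directory
            }
            raw, err := os.ReadFile(filepath.Join("/proc", e.Name(), "cmdline"))
            if err != nil {
                continue
            }
            // argv is NUL-separated; pgrep -f matches the space-joined form.
            line := strings.TrimRight(string(bytes.ReplaceAll(raw, []byte{0}, []byte{' '})), " ")
            if re.MatchString(line) && pid > newest {
                newest = pid
            }
        }
        if newest < 0 {
            return 0, fmt.Errorf("no process matches %q", pattern)
        }
        return newest, nil
    }

    func main() {
        pid, err := newestMatchingPID("kube-apiserver.*minikube.*")
        if err != nil {
            fmt.Println(err) // expected here, since the apiserver never started
            return
        }
        fmt.Println("newest matching pid:", pid)
    }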
	I1216 06:23:47.373432    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:47.674375    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:47.705067    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.705067    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:47.709193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:47.739921    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.739921    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:47.743656    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:47.771970    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.771970    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:47.776451    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:47.808633    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.808633    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:47.813124    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:47.856079    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.856079    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:47.859452    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:47.891897    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.891897    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:47.895769    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:47.926050    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.926050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:47.929679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:47.962571    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.962571    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:47.962571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:47.962571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:48.026367    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:48.026367    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:48.063580    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:48.063580    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:48.173751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:48.173792    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:48.173792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:48.199403    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:48.199403    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:50.750699    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:50.774573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:50.804983    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.804983    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:50.808894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:50.838533    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.838533    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:50.842242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:50.873377    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.873377    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:50.877508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:50.907646    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.907646    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:50.912410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:50.943853    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.943853    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:50.950275    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:50.977570    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.977570    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:50.982575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:51.010211    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.010211    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:51.014545    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:51.048584    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.048584    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:51.048584    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:51.048584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:51.112725    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:51.112725    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:51.150854    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:51.150854    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:51.246494    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:51.246535    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:51.246535    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:51.274873    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:51.274873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
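The timestamps show the whole block repeating roughly every three seconds (06:23:22, :25, :28, ... :53): minikube is polling for the apiserver to appear and re-gathering diagnostics on every miss. The shape of such a wait loop, sketched with the port probe from above:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "time"
    )

    // waitForAPIServer polls the port until it answers or the deadline
    // passes, mirroring the ~3 s retry cadence visible in the log.
    func waitForAPIServer(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return errors.New("apiserver did not come up before the deadline")
    }

    func main() {
        if err := waitForAPIServer("localhost:8443", time.Minute); err != nil {
            fmt.Println(err)
        }
    }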
	I1216 06:23:53.832981    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:53.857995    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:53.892159    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.892159    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:53.895775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:53.926160    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.926160    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:53.929408    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:53.956482    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.956552    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:53.959711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:53.989788    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.989788    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:53.993230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:54.022506    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.022506    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:54.025409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:54.054974    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.054974    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:54.059372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:54.088015    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.088015    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:54.092123    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:54.121961    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.121961    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:54.121961    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:54.121961    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:54.169232    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:54.169295    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:54.230158    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:54.231156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:54.267713    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:54.267713    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:54.368006    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:54.368006    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:54.368006    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
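	Note: each cycle probes for control-plane containers by name prefix with docker ps -a --filter name=k8s_<component> --format {{.ID}}; the "0 containers: []" lines mean kubelet never created any of them. A hedged Go sketch of that probe, reusing the exact docker flags from the log (the wrapper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: docker ps -a --filter name=k8s_<component> --format {{.ID}}
	// An empty result corresponds to the "0 containers: []" lines above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}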
	I1216 06:23:56.899723    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:56.923149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:56.957635    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.957635    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:56.961499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:56.988363    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.988363    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:56.992371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:57.021993    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.021993    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:57.025544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:57.055718    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.055718    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:57.060969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:57.092456    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.092523    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:57.096418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:57.125588    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.125588    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:57.129665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:57.160663    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.160663    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:57.164518    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:57.196231    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.196281    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:57.196281    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:57.196281    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:57.258973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:57.258973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:57.302939    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:57.302939    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:57.397577    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:57.397577    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:57.397577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:57.434801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:57.434801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
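	Note: the "container status" gather prefers crictl when installed and falls back to plain docker ps -a (the logged command is sudo `which crictl || echo crictl` ps -a || sudo docker ps -a). A sketch of the same fallback in Go, assuming it runs inside the node where the log's sudo is implied:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus tries crictl first and falls back to docker if crictl
	// is missing or fails, matching the shell one-liner in the log.
	func containerStatus() ([]byte, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
				return out, nil
			}
		}
		return exec.Command("docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
			return
		}
		fmt.Print(string(out))
	}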
	I1216 06:23:59.991022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:00.014170    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:00.046529    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.046529    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:00.050903    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:00.080796    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.080796    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:00.084418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:00.114858    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.114858    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:00.121404    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:00.152596    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.152596    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:00.156447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:00.183532    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.183648    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:00.187074    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:00.218971    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.218971    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:00.222929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:00.252086    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.252086    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:00.256309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:00.285884    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.285884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:00.285884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:00.285884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:00.364208    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:00.364208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:00.403464    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:00.403464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:00.495864    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:00.495864    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:00.495864    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:00.521592    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:00.521592    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:03.070724    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:03.093858    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:03.127112    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.127112    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:03.131265    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:03.161262    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.161262    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:03.165073    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:03.195882    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.195933    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:03.200488    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:03.230205    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.230205    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:03.234193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:03.263580    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.263629    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:03.267410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:03.297599    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.297652    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:03.300957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:03.329666    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.329720    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:03.333378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:03.365184    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.365236    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:03.365282    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:03.365282    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:03.428385    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:03.428385    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:03.465984    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:03.465984    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:03.557873    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:03.559101    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:03.559101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:03.586791    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:03.586791    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:06.142562    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:06.170227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:06.202672    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.202672    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:06.206691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:06.237624    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.237624    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:06.241559    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:06.267616    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.267616    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:06.271709    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:06.304567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.304567    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:06.308556    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:06.337567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.337567    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:06.344744    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:06.373520    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.373520    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:06.377184    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:06.411936    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.411936    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:06.415789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:06.447263    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.447263    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:06.447263    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:06.447263    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:06.509097    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:06.509097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:06.546188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:06.546188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:06.639923    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:06.639923    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:06.639923    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:06.666485    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:06.666519    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.221249    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:09.244788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:09.276490    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.276490    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:09.280706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:09.309520    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.309520    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:09.313105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:09.339092    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.339092    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:09.343484    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:09.369046    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.369046    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:09.373188    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:09.403810    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.403810    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:09.407108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:09.437156    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.437156    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:09.441754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:09.469752    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.469810    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:09.473378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:09.503754    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.503754    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:09.503754    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:09.503754    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:09.533645    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:09.533718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.587529    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:09.587529    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:09.647801    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:09.647801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:09.686577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:09.686577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:09.782674    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
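	Note: with no containers to inspect, the only diagnostics each cycle actually yields are the kubelet/Docker journals and the filtered dmesg tail. A sketch that runs the same three gathers (commands copied verbatim from the log; they assume root inside the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command and prints its combined output.
	func gather(name string, argv ...string) {
		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		fmt.Printf("==> %s (err: %v)\n%s\n", name, err, out)
	}

	func main() {
		gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
		gather("Docker", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
		gather("dmesg", "bash", "-c",
			"dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	}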
	I1216 06:24:12.288199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:12.313967    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:12.344043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.344043    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:12.348347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:12.378683    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.378683    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:12.382106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:12.411599    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.411599    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:12.415131    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:12.445826    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.445873    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:12.450940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:12.481043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.481078    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:12.484800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:12.512969    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.512990    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:12.515915    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:12.548151    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.548228    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:12.551706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:12.584039    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.584039    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:12.584039    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:12.584039    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:12.646680    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:12.646680    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:12.686545    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:12.686545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:12.804767    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:12.804767    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:12.804767    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:12.831866    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:12.831866    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:15.392415    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:15.416435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:15.445044    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.445044    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:15.449260    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:15.476688    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.476688    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:15.481012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:15.508866    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.508928    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:15.512662    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:15.541002    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.541002    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:15.545363    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:15.574947    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.574991    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:15.578407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:15.604751    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.604751    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:15.608699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:15.639261    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.639338    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:15.642317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:15.674404    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.674404    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:15.674404    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:15.674404    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:15.736218    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:15.736218    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:15.774188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:15.774188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:15.862546    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:15.862546    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:15.862546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:15.888115    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:15.888115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.441031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:18.465207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:18.495447    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.495481    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:18.498929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:18.528412    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.528476    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:18.531543    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:18.560175    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.560175    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:18.563996    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:18.592824    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.592894    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:18.596175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:18.623746    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.623746    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:18.627099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:18.652978    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.653013    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:18.656407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:18.683637    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.683686    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:18.687125    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:18.716903    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.716942    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:18.716964    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:18.716981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:18.743123    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:18.743675    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.794891    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:18.794891    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:18.858345    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:18.858345    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:18.894242    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:18.894242    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:18.979844    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:21.485585    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:21.510290    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:21.539823    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.539823    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:21.543159    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:21.575241    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.575241    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:21.579330    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:21.607389    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.607490    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:21.611023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:21.642332    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.642332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:21.645973    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:21.671339    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.671390    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:21.675048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:21.704483    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.704483    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:21.708499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:21.734944    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.735027    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:21.738688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:21.768890    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.768890    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:21.768987    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:21.768987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:21.800297    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:21.800344    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:21.854571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:21.854571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:21.921230    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:21.921230    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:21.961787    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:21.961787    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:22.060842    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
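	The iteration above — pgrep for the apiserver, docker ps for each expected control-plane container, then kubectl describe nodes via the node's kubeconfig — repeats below roughly every 3 seconds with only timestamps and PIDs changing, always finding zero containers and a refused connection on localhost:8443. Run by hand inside the node, the probe amounts to the following (a minimal bash sketch assembled from the commands in this log; the 3-second sleep is an assumption inferred from the timestamps, not taken from minikube source):
	
	    # sketch of the wait loop recorded above (run inside the minikube node)
	    while true; do
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*'                      # any apiserver process at all?
	      docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'  # empty output => no apiserver container
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig \
	        || echo 'apiserver still refusing connections on localhost:8443'
	      sleep 3  # assumed polling interval
	    done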
	I1216 06:24:24.566957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:24.591909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:24.624010    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.624010    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:24.627550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:24.657938    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.657938    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:24.661917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:24.688848    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.688848    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:24.692388    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:24.722130    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.722165    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:24.725802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:24.754067    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.754134    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:24.757294    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:24.783522    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.783595    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:24.787022    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:24.818531    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.818531    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:24.822200    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:24.851316    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.851371    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:24.851391    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:24.851391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:24.940030    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:24.941511    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:24.941511    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:24.967127    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:24.967127    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:25.018271    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:25.018358    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:25.077769    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:25.077769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:27.621222    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:27.644179    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:27.675033    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.675033    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:27.678724    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:27.707945    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.707945    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:27.712443    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:27.740635    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.740635    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:27.744539    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:27.775332    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.775332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:27.779621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:27.807884    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.807884    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:27.812207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:27.843877    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.843877    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:27.850126    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:27.878365    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.878365    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:27.883323    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:27.911733    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.911733    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:27.911733    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:27.911733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:27.975085    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:27.975085    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:28.011926    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:28.011926    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:28.117959    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:28.117959    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:28.117959    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:28.146135    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:28.146135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:30.702904    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:30.732783    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:30.768726    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.768726    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:30.772432    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:30.804888    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.804888    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:30.809005    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:30.839403    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.839403    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:30.843668    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:30.874013    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.874013    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:30.878013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:30.906934    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.906934    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:30.911178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:30.936942    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.936942    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:30.940954    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:30.967843    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.967843    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:30.973798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:31.000614    8452 logs.go:282] 0 containers: []
	W1216 06:24:31.000614    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:31.000614    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:31.000614    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:31.063545    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:31.063545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:31.101704    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:31.101704    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:31.201356    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:31.201356    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:31.201356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:31.229634    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:31.229634    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:33.780745    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:33.805148    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:33.836319    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.836319    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:33.840094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:33.872138    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.872167    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:33.875487    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:33.908318    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.908318    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:33.912197    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:33.940179    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.940223    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:33.944152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:33.974912    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.974912    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:33.978728    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:34.004557    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.004557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:34.008971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:34.037591    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.037591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:34.041537    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:34.073153    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.073153    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:34.073153    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:34.073153    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:34.139585    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:34.139585    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:34.177888    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:34.177888    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:34.273589    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:34.273589    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:34.273589    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:34.298805    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:34.298805    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:36.851957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:36.889887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:36.919682    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.919682    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:36.923468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:36.953008    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.953073    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:36.957253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:36.985770    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.985770    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:36.989059    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:37.015702    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.015702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:37.019508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:37.046311    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.046351    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:37.050327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:37.087936    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.087936    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:37.092175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:37.121271    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.121271    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:37.125767    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:37.153753    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.153814    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:37.153814    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:37.153869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:37.218058    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:37.218058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:37.256162    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:37.257161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:37.349292    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:37.349292    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:37.349292    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:37.378861    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:37.379384    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:39.931797    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:39.956069    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:39.991154    8452 logs.go:282] 0 containers: []
	W1216 06:24:39.991154    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:39.994809    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:40.021488    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.021488    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:40.025604    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:40.055464    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.055464    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:40.059576    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:40.085410    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.086402    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:40.090048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:40.120389    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.120389    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:40.125766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:40.159925    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.159962    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:40.163912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:40.190820    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.190820    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:40.194350    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:40.223821    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.223886    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:40.223886    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:40.223886    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:40.292033    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:40.292033    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:40.331274    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:40.331274    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:40.423708    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:40.423708    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:40.423708    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:40.452101    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:40.452101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.005925    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:43.029165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:43.060601    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.060601    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:43.064304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:43.092446    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.092446    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:43.096552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:43.127295    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.127347    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:43.130913    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:43.159919    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.159986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:43.163049    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:43.190310    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.190384    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:43.194093    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:43.223641    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.223641    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:43.227270    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:43.254592    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.254592    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:43.259912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:43.293166    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.293166    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:43.293166    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:43.293166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:43.328685    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:43.328685    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:43.412970    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:43.413012    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:43.413042    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:43.444573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:43.444573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.501857    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:43.501857    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.068154    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:46.095291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:46.125740    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.125740    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:46.131016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:46.160926    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.160926    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:46.164909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:46.192634    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.192634    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:46.196346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:46.224203    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.224203    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:46.228650    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:46.255541    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.255541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:46.259732    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:46.289377    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.289377    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:46.293566    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:46.321342    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.321342    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:46.325492    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:46.352311    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.352342    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:46.352342    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:46.352382    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.416761    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:46.416761    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:46.469641    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:46.469641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:46.580672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:46.581191    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:46.581229    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:46.608166    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:46.608166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:49.162834    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:49.187402    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:49.219893    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.219893    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:49.223424    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:49.252338    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.252338    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:49.255900    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:49.286106    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.286131    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:49.289776    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:49.317141    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.317141    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:49.322761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:49.353605    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.353605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:49.357674    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:49.385747    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.385793    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:49.388757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:49.417812    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.417812    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:49.421500    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:49.452746    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.452797    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:49.452797    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:49.452797    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:49.516432    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:49.516432    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:49.553647    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:49.553647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:49.647049    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:49.647087    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:49.647087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:49.671889    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:49.671889    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
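
The cycle above is minikube's log collector polling the node: it greps for a running kube-apiserver process, then asks Docker for each expected control-plane container by name, and finds none. A minimal sketch of the same probe, runnable by hand inside the node (the docker ps and pgrep invocations are taken verbatim from this log; the k8s_ name prefix matches the kubelet-managed container naming seen here):

    # Re-run the collector's container probe for each control-plane component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      id=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
      echo "${c}: ${id:-<no container found>}"
    done
    # The process-level check the collector runs first:
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver is not running'
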
	[... the probe-and-gather cycle above repeats every ~3 seconds from 06:24:52 through 06:25:17 with identical results: no control-plane containers are found, and every "kubectl describe nodes" attempt fails with "dial tcp [::1]:8443: connect: connection refused" ...]
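
Every "describe nodes" attempt in this window fails the same way: kubectl inside the node gets connection refused on localhost:8443 at the TCP level, meaning nothing is listening on the apiserver's port rather than the apiserver rejecting the request. A quick confirmation from the host could look like the sketch below (the profile name is a placeholder, and it assumes curl is present in the node image, as it is in recent kicbase images):

    # Probe the apiserver's serving port from inside the minikube node.
    minikube ssh -p <profile> -- curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo 'apiserver port 8443 is not accepting connections'
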
	I1216 06:25:19.869062    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:19.894371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:19.924915    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.924915    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:19.929351    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:19.956535    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.956535    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:19.960534    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:19.989334    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.989334    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:19.993202    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:20.021108    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.021108    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:20.025230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:20.054251    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.054251    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:20.057788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:20.088787    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.088860    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:20.092250    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:20.120577    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.120577    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:20.123857    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:20.153015    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.153015    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:20.153015    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:20.153015    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:20.241391    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:20.241391    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:20.241391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:20.267492    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:20.267554    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:20.321240    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:20.321880    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:20.384978    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:20.384978    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:22.926087    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:22.949774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:22.982854    8452 logs.go:282] 0 containers: []
	W1216 06:25:22.982854    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:22.986923    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:23.017638    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.017638    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:23.021130    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:23.052442    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.052667    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:23.058175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:23.085210    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.085210    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:23.089664    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:23.120747    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.120795    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:23.124581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:23.150600    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.150600    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:23.154602    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:23.182147    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.182147    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:23.185649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:23.217087    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.217087    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:23.217087    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:23.217087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:23.280619    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:23.280619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:23.318090    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:23.318090    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:23.406270    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:23.406270    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:23.406270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:23.435128    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:23.435128    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:25.989934    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:26.012706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:26.043141    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.043141    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:26.047435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:26.075985    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.075985    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:26.079830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:26.110575    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.110575    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:26.113774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:26.144668    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.144668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:26.148428    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:26.175392    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.175392    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:26.179120    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:26.211067    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.211067    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:26.215072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:26.243555    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.243586    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:26.246934    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:26.279876    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.279876    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:26.279876    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:26.279876    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:26.387447    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:26.387488    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:26.387537    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:26.413896    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:26.413896    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:26.462318    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:26.462318    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:26.527832    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:26.527832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.072565    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:29.096390    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:29.127989    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.127989    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:29.131385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:29.158741    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.158741    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:29.162538    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:29.190346    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.190346    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:29.193798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:29.222234    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.222234    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:29.225740    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:29.252553    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.252553    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:29.256489    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:29.285679    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.285733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:29.289742    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:29.320841    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.321050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:29.324841    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:29.352461    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.352587    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:29.352615    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:29.352615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:29.419045    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:29.419045    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.457659    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:29.457659    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:29.544155    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:29.544155    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:29.544155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:29.571612    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:29.571646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:32.139910    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:32.164438    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:32.196526    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.196526    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:32.200231    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:32.226279    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.226279    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:32.230146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:32.257831    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.257831    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:32.262665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:32.293641    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.293641    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:32.297746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:32.327055    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.327055    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:32.331274    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:32.362206    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.362206    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:32.365146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:32.394600    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.394600    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:32.400058    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:32.428075    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.428075    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:32.428075    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:32.428075    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:32.491661    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:32.491661    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:32.528847    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:32.528847    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:32.616464    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:25:32.616464    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:32.616464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:32.642397    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:32.642397    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:35.191852    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:35.225285    8452 out.go:203] 
	W1216 06:25:35.227244    8452 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 06:25:35.227244    8452 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 06:25:35.227244    8452 out.go:285] * Related issues:
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 06:25:35.230096    8452 out.go:203] 
	
	
	==> Docker <==
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162855054Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162940064Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162949966Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162955666Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.162961567Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.163040877Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.163140989Z" level=info msg="Initializing buildkit"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.281453678Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293658962Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293830383Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.293958199Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:19:30 newest-cni-256200 dockerd[929]: time="2025-12-16T06:19:30.294017906Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:19:30 newest-cni-256200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:19:31 newest-cni-256200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:19:31 newest-cni-256200 cri-dockerd[1224]: time="2025-12-16T06:19:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:19:31 newest-cni-256200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:52.154030   20458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:52.155076   20458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:52.157347   20458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:52.158610   20458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:52.159788   20458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633501] CPU: 10 PID: 466820 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f865800db20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f865800daf6.
	[  +0.000001] RSP: 002b:00007ffc8c624780 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000033] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.839091] CPU: 12 PID: 466960 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fa6af131b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fa6af131af6.
	[  +0.000001] RSP: 002b:00007ffe97387e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 06:22] tmpfs: Unknown parameter 'noswap'
	[  +9.428310] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:25:52 up  2:02,  0 user,  load average: 1.20, 3.21, 3.84
	Linux newest-cni-256200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:25:48 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:49 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 16 06:25:49 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:49 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:49 newest-cni-256200 kubelet[20272]: E1216 06:25:49.367314   20272 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:49 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:49 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:50 newest-cni-256200 kubelet[20293]: E1216 06:25:50.103906   20293 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:50 newest-cni-256200 kubelet[20326]: E1216 06:25:50.855324   20326 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:50 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:25:51 newest-cni-256200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
	Dec 16 06:25:51 newest-cni-256200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:51 newest-cni-256200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:25:51 newest-cni-256200 kubelet[20341]: E1216 06:25:51.606527   20341 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:25:51 newest-cni-256200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:25:51 newest-cni-256200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
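The kubelet entries in the dump above point at the likely root cause of this failure: the v1.35.0-beta.0 kubelet refuses to start on a host still using cgroup v1, so no control-plane container is ever created and minikube eventually exits with K8S_APISERVER_MISSING. A quick way to confirm the cgroup mode of the node is a one-liner over minikube ssh (a generic check, not part of the test suite; the profile name is taken from this run):

	minikube -p newest-cni-256200 ssh -- stat -fc %T /sys/fs/cgroup/
	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on the legacy v1 hierarchy

The repeated "tmpfs: Unknown parameter 'noswap'" dmesg lines fit the same picture: the 5.15 microsoft-standard-WSL2 kernel predates the tmpfs 'noswap' mount option (added in newer kernels) and, as the kubelet validation error shows, is running the legacy cgroup layout.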
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-256200 -n newest-cni-256200: exit status 2 (563.5627ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-256200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (12.22s)
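Given the microsoft-standard-WSL2 kernel string in the dump, one commonly cited host-side workaround for the cgroup v1 validation failure is forcing the WSL2 kernel onto the unified cgroup v2 hierarchy via .wslconfig. A sketch of that change follows (a hypothetical host configuration, not something this run verifies; it requires a full `wsl --shutdown` before it takes effect):

	# %UserProfile%\.wslconfig -- hypothetical host-side change
	[wsl2]
	kernelCommandLine = cgroup_no_v1=all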

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (225.28s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1216 06:31:18.333343   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:32:01.859770   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:32:16.460801   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:32:25.219160   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:32:43.902110   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:32:44.168874   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:32:45.089226   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:33:01.103891   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:33:07.184280   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:33:20.291325   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:33:24.947471   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:33:28.812978   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:34:04.016782   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:34:30.257846   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55116/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1216 06:34:52.012721   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 2 (599.3197ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-686300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-686300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (0s)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-686300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
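The cert_rotation errors interleaved above appear to come from client certificates of other, already-deleted profiles on the same host (functional-002200, addons-555000, and so on) and are incidental; the test fails because the no-preload-686300 apiserver stayed Stopped, so the dashboard pods could never be listed. On a healthy cluster, the same checks the test performs can be reproduced by hand (context, namespace, label, and expected image all taken from this run):

	kubectl --context no-preload-686300 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-686300 -n kubernetes-dashboard get deploy/dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects the image to contain registry.k8s.io/echoserver:1.4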
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-686300
helpers_test.go:244: (dbg) docker inspect no-preload-686300:

-- stdout --
	[
	    {
	        "Id": "f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc",
	        "Created": "2025-12-16T06:04:57.603416998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 408764,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-16T06:15:50.357035984Z",
	            "FinishedAt": "2025-12-16T06:15:46.555763422Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/hosts",
	        "LogPath": "/var/lib/docker/containers/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc/f98924d46fbc1c81f76008c41e22dcf4b400d9d4d365ad664adf927ebba167cc-json.log",
	        "Name": "/no-preload-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6-init/diff:/var/lib/docker/overlay2/b16ee94315e9404a934c2f478657ca5e67808c5765d01866d34541ee9c82e90f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/856950ec24ebd4f7ebd2c7e018c0cd7e2b2acc8bee588eb74bfc967536fd00c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-686300",
	                "Source": "/var/lib/docker/volumes/no-preload-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-686300",
	                "name.minikube.sigs.k8s.io": "no-preload-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58679b470f3820ec221a43ce0cb2eeb96c16084feb347cd3733ff5e676214bcf",
	            "SandboxKey": "/var/run/docker/netns/58679b470f38",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55112"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55113"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55114"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c88c2c552749da4ec31002e488ccc5e8184c9ce7ba360033e35a2aa2c5aead9",
	                    "EndpointID": "43959eb122225f782ad58d938dd1f7bfc24c45960ef7507609ea418938e5d63c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-686300",
	                        "f98924d46fbc"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
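The inspect dump above is verbose; when only a few fields matter, docker inspect's Go-template --format flag can narrow it. A sketch against the same container (standard docker CLI flags, nothing minikube-specific):

    # Container lifecycle fields only: note StartedAt (06:15:50) is later than
    # FinishedAt (06:15:46), i.e. the container was stopped and started again.
    docker inspect no-preload-686300 --format '{{.State.Status}} {{.State.StartedAt}} {{.State.FinishedAt}}'
    # Host port mappings as JSON (e.g. 8443/tcp -> 127.0.0.1:55116)
    docker inspect no-preload-686300 --format '{{json .NetworkSettings.Ports}}'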
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 2 (584.4871ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
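Exit status 2 with Host reporting "Running" is consistent with the earlier finding that the container is up while the apiserver is stopped. A sketch that surfaces the per-component fields in one call (Host, Kubelet, and APIServer are status template fields, as with the {{.Host}} format used above):

    # Show host, kubelet, and apiserver state together instead of only {{.Host}}
    out/minikube-windows-amd64.exe status -p no-preload-686300 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'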
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-686300 logs -n 25: (1.4764972s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-030800 sudo cat /var/lib/kubelet/config.yaml                         │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status docker --all --full --no-pager          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat docker --no-pager                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/docker/daemon.json                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo docker system info                                       │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status cri-docker --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat cri-docker --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cri-dockerd --version                                    │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status containerd --all --full --no-pager      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat containerd --no-pager                      │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /lib/systemd/system/containerd.service               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo cat /etc/containerd/config.toml                          │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo containerd config dump                                   │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo systemctl status crio --all --full --no-pager            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │                     │
	│ ssh     │ -p kubenet-030800 sudo systemctl cat crio --no-pager                            │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ ssh     │ -p kubenet-030800 sudo crio config                                              │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ delete  │ -p kubenet-030800                                                               │ kubenet-030800    │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:23 UTC │ 16 Dec 25 06:23 UTC │
	│ image   │ newest-cni-256200 image list --format=json                                      │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ pause   │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ unpause │ -p newest-cni-256200 --alsologtostderr -v=1                                     │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ delete  │ -p newest-cni-256200                                                            │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	│ delete  │ -p newest-cni-256200                                                            │ newest-cni-256200 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 06:25 UTC │ 16 Dec 25 06:25 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 06:21:31
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 06:21:31.068463    4424 out.go:360] Setting OutFile to fd 1300 ...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.112163    4424 out.go:374] Setting ErrFile to fd 1224...
	I1216 06:21:31.112163    4424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 06:21:31.126168    4424 out.go:368] Setting JSON to false
	I1216 06:21:31.128157    4424 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7112,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 06:21:31.129155    4424 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 06:21:31.133155    4424 out.go:179] * [kubenet-030800] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 06:21:31.136368    4424 notify.go:221] Checking for updates...
	I1216 06:21:31.137751    4424 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:31.140914    4424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 06:21:31.143313    4424 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 06:21:31.145626    4424 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 06:21:31.147629    4424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 06:21:31.150478    4424 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "newest-cni-256200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151126    4424 config.go:182] Loaded profile config "no-preload-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 06:21:31.151727    4424 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 06:21:31.272417    4424 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 06:21:31.275875    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.534539    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.516919297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.537553    4424 out.go:179] * Using the docker driver based on user configuration
	I1216 06:21:31.541211    4424 start.go:309] selected driver: docker
	I1216 06:21:31.541254    4424 start.go:927] validating driver "docker" against <nil>
	I1216 06:21:31.541286    4424 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 06:21:31.597589    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:31.842240    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:31.823958826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:31.842240    4424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 06:21:31.843240    4424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:31.846236    4424 out.go:179] * Using Docker Desktop driver with root privileges
	I1216 06:21:31.848222    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:21:31.848222    4424 start.go:353] cluster config:
	{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:21:31.851222    4424 out.go:179] * Starting "kubenet-030800" primary control-plane node in "kubenet-030800" cluster
	I1216 06:21:31.860233    4424 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 06:21:31.863229    4424 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1216 06:21:31.866228    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:31.866228    4424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 06:21:31.866228    4424 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 06:21:31.866228    4424 cache.go:65] Caching tarball of preloaded images
	I1216 06:21:31.866228    4424 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1216 06:21:31.866228    4424 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 06:21:31.866228    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:31.866228    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json: {Name:mkd9bbe5249f898d86f7b7f3961735d2ed71d636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:31.935458    4424 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1216 06:21:31.935458    4424 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1216 06:21:31.935988    4424 cache.go:243] Successfully downloaded all kic artifacts
	I1216 06:21:31.936042    4424 start.go:360] acquireMachinesLock for kubenet-030800: {Name:mka6ae821c9ad8ee62e1a8eef0f2acffe81ebe64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 06:21:31.936202    4424 start.go:364] duration metric: took 160.2µs to acquireMachinesLock for "kubenet-030800"
	I1216 06:21:31.936352    4424 start.go:93] Provisioning new machine with config: &{Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:31.936477    4424 start.go:125] createHost starting for "" (driver="docker")
	I1216 06:21:30.055522    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:30.078529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:30.109519    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.109519    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:30.112511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:30.144765    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.144765    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:30.149313    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:30.181852    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.181852    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:30.184838    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:30.217464    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.217504    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:30.221332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:30.249301    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.249301    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:30.252441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:30.278998    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.278998    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:30.281997    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:30.312414    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.312414    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:30.318242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:30.357304    8452 logs.go:282] 0 containers: []
	W1216 06:21:30.357361    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:30.357422    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:30.357422    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:30.746929    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:30.746929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:30.795665    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:30.795746    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:30.878691    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:30.878691    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:30.913679    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:30.913679    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:31.010602    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:30.996811    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:30.998427    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.002658    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.003935    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:31.006305    6580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:33.516463    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:33.569431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:33.607164    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.607164    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:33.610177    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:33.645795    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.645795    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:33.649795    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:33.678786    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.678786    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:33.681789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:33.712090    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.712090    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:33.716091    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:33.749207    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.749207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:33.753072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:33.787281    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.787317    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:33.790849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:33.822451    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.822451    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:33.828957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:33.859827    8452 logs.go:282] 0 containers: []
	W1216 06:21:33.859881    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:33.859881    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:33.859929    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:33.922584    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:33.922584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:34.010640    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:33.999745    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.001327    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.002873    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.006569    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:34.007797    6731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:34.010640    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:34.010640    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:34.047937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:34.047937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:34.108035    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:34.108113    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:31.939854    4424 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1216 06:21:31.939854    4424 start.go:159] libmachine.API.Create for "kubenet-030800" (driver="docker")
	I1216 06:21:31.939854    4424 client.go:173] LocalClient.Create starting
	I1216 06:21:31.940866    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941130    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Decoding PEM data...
	I1216 06:21:31.941216    4424 main.go:143] libmachine: Parsing certificate...
	I1216 06:21:31.946190    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 06:21:32.002258    4424 cli_runner.go:211] docker network inspect kubenet-030800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 06:21:32.006251    4424 network_create.go:284] running [docker network inspect kubenet-030800] to gather additional debugging logs...
	I1216 06:21:32.006251    4424 cli_runner.go:164] Run: docker network inspect kubenet-030800
	W1216 06:21:32.057274    4424 cli_runner.go:211] docker network inspect kubenet-030800 returned with exit code 1
	I1216 06:21:32.057274    4424 network_create.go:287] error running [docker network inspect kubenet-030800]: docker network inspect kubenet-030800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-030800 not found
	I1216 06:21:32.057274    4424 network_create.go:289] output of [docker network inspect kubenet-030800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-030800 not found
	
	** /stderr **
	I1216 06:21:32.061267    4424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 06:21:32.137401    4424 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.168856    4424 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.184860    4424 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.200856    4424 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.216426    4424 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.232146    4424 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d96b0}
	I1216 06:21:32.232146    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1216 06:21:32.235443    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	W1216 06:21:32.288644    4424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800 returned with exit code 1
	W1216 06:21:32.288644    4424 network_create.go:149] failed to create docker network kubenet-030800 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1216 06:21:32.288644    4424 network_create.go:116] failed to create docker network kubenet-030800 192.168.94.0/24, will retry: subnet is taken
	I1216 06:21:32.308048    4424 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1216 06:21:32.321168    4424 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015f57d0}
	I1216 06:21:32.321265    4424 network_create.go:124] attempt to create docker network kubenet-030800 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1216 06:21:32.325637    4424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-030800 kubenet-030800
	I1216 06:21:32.469323    4424 network_create.go:108] docker network kubenet-030800 192.168.103.0/24 created
	I1216 06:21:32.469323    4424 kic.go:121] calculated static IP "192.168.103.2" for the "kubenet-030800" container
	I1216 06:21:32.483125    4424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 06:21:32.541557    4424 cli_runner.go:164] Run: docker volume create kubenet-030800 --label name.minikube.sigs.k8s.io=kubenet-030800 --label created_by.minikube.sigs.k8s.io=true
	I1216 06:21:32.608360    4424 oci.go:103] Successfully created a docker volume kubenet-030800
	I1216 06:21:32.611360    4424 cli_runner.go:164] Run: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1216 06:21:34.117036    4424 cli_runner.go:217] Completed: docker run --rm --name kubenet-030800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --entrypoint /usr/bin/test -v kubenet-030800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.5056549s)
	I1216 06:21:34.117036    4424 oci.go:107] Successfully prepared a docker volume kubenet-030800
	I1216 06:21:34.117036    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:21:34.117036    4424 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 06:21:34.121793    4424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
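The command above is minikube's preload-extraction pattern: run the kicbase image with tar as the entrypoint, bind-mount the .tar.lz4 read-only, and unpack it into the named volume that will later back the node container. A hand-runnable sketch of the same pattern (the volume name demo-vol and the local tarball path are illustrative; the image digest pin from the run is omitted):

    IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141  # tag from this run
    docker volume create demo-vol
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir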
	I1216 06:21:37.760556    7800 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:21:37.760556    7800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:21:37.761189    7800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:21:37.761753    7800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:21:37.761881    7800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:21:37.761881    7800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:21:37.764442    7800 out.go:252]   - Generating certificates and keys ...
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:21:37.764585    7800 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:21:37.765188    7800 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-030800 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1216 06:21:37.765339    7800 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:21:37.765955    7800 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:21:37.766018    7800 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:21:37.766124    7800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:21:37.766165    7800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:21:37.766271    7800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:21:37.766333    7800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:21:37.766397    7800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:21:37.766458    7800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:21:37.766458    7800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:21:37.770151    7800 out.go:252]   - Booting up control plane ...
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:21:37.770151    7800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:21:37.770817    7800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:21:37.770952    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:21:37.771091    7800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:21:37.771167    7800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:21:37.771225    7800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:21:37.771366    7800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:21:37.771366    7800 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004327208s
	I1216 06:21:37.771902    7800 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:21:37.772247    7800 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1216 06:21:37.772484    7800 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:21:37.772735    7800 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:21:37.773067    7800 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.101943404s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.591910767s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002177662s
	I1216 06:21:37.773211    7800 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:21:37.773799    7800 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:21:37.773799    7800 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:21:37.774455    7800 kubeadm.go:319] [mark-control-plane] Marking the node bridge-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:21:37.774523    7800 kubeadm.go:319] [bootstrap-token] Using token: lrkd8c.ky3vlqagn7chac73
	I1216 06:21:37.777890    7800 out.go:252]   - Configuring RBAC rules ...
	I1216 06:21:37.777890    7800 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:21:37.778486    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:21:37.779084    7800 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:21:37.779666    7800 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:21:37.779696    7800 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:21:37.779696    7800 kubeadm.go:319] 
	I1216 06:21:37.779696    7800 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:21:37.780278    7800 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:21:37.780278    7800 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:21:37.780278    7800 kubeadm.go:319] 
	I1216 06:21:37.780278    7800 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:21:37.781243    7800 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--control-plane 
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:21:37.781243    7800 kubeadm.go:319] 
	I1216 06:21:37.781243    7800 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lrkd8c.ky3vlqagn7chac73 \
	I1216 06:21:37.781243    7800 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
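If the join command above is lost, the discovery hash can be recomputed from the cluster CA. This is the standard kubeadm recipe (not taken from this log), run inside the control-plane node, e.g. via minikube ssh -p bridge-030800:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'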
	I1216 06:21:37.782257    7800 cni.go:84] Creating CNI manager for "bridge"
	I1216 06:21:37.785969    7800 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1216 06:21:35.013402    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:37.791788    7800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 06:21:37.806804    7800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
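The 496-byte conflist scp'd above is the bridge CNI configuration for the node. The exact file from this run is not reproduced in the log; the sketch below is a generic bridge+portmap conflist of the kind written to /etc/cni/net.d, with all field values illustrative:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF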
	I1216 06:21:37.825807    7800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:37.829804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-030800 minikube.k8s.io/updated_at=2025_12_16T06_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=bridge-030800 minikube.k8s.io/primary=true
	I1216 06:21:37.839814    7800 ops.go:34] apiserver oom_adj: -16
	I1216 06:21:38.032186    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:38.534048    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.035804    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:39.534294    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:36.686452    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:36.704466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:36.733394    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.733394    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:36.737348    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:36.773510    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.773510    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:36.776509    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:36.805498    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.805498    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:36.809501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:36.845499    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.845499    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:36.849511    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:36.879646    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.879646    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:36.884108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:36.915408    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.915408    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:36.920279    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:36.952754    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.952754    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:36.955734    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:36.987884    8452 logs.go:282] 0 containers: []
	W1216 06:21:36.987884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:36.987884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:36.987884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:37.053646    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:37.053646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:37.093881    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:37.093881    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:37.179527    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:37.167265    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.168416    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.170284    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.173656    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:37.175713    6896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:37.179584    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:37.179628    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:37.206261    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:37.206297    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
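The gathering cycle above (pid 8452) keeps finding zero k8s_* containers while kubectl is refused on localhost:8443, which means the apiserver container never came up on that node. Two generic first checks from inside the node (run via minikube ssh -p <profile>, profile name not identifiable from this log excerpt; these commands are illustrative diagnostics, not taken from the run):

    sudo ss -ltnp | grep 8443                # is anything listening on the apiserver port?
    curl -ksS https://localhost:8443/livez   # does it answer at all?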
	I1216 06:21:39.778747    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:39.802598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:39.832179    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.832179    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:39.836704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:39.869121    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.869121    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:39.873774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:39.909668    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.909668    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:39.914691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:39.947830    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.947830    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:39.951594    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:40.034177    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:40.535099    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.034558    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:41.535126    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.034691    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:42.533593    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.035143    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:43.831113    7800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:21:44.554108    7800 kubeadm.go:1114] duration metric: took 6.7282073s to wait for elevateKubeSystemPrivileges
	I1216 06:21:44.554108    7800 kubeadm.go:403] duration metric: took 23.3439157s to StartCluster
	I1216 06:21:44.554108    7800 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.554108    7800 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:21:44.555899    7800 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:21:44.557179    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:21:44.557179    7800 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:21:44.557179    7800 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:21:44.557179    7800 addons.go:70] Setting storage-provisioner=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:239] Setting addon storage-provisioner=true in "bridge-030800"
	I1216 06:21:44.557179    7800 addons.go:70] Setting default-storageclass=true in profile "bridge-030800"
	I1216 06:21:44.557179    7800 host.go:66] Checking if "bridge-030800" exists ...
	I1216 06:21:44.557179    7800 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-030800"
	I1216 06:21:44.557179    7800 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.566903    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:44.910438    7800 out.go:179] * Verifying Kubernetes components...
	I1216 06:21:39.982557    8452 logs.go:282] 0 containers: []
	W1216 06:21:39.982557    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:39.986642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:40.018169    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.018169    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:40.021165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:40.051243    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.051243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:40.057090    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:40.084414    8452 logs.go:282] 0 containers: []
	W1216 06:21:40.084414    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:40.084414    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:40.084414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:40.144414    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:40.144414    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:40.179632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:40.179632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:40.269800    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:40.260303    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.261845    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.263158    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.264869    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:40.265909    7072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:40.270323    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:40.270323    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:40.298399    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:40.298399    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:42.860131    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:42.883733    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:42.914204    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.914204    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:42.918228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:42.950380    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.950459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:42.954335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:42.985562    8452 logs.go:282] 0 containers: []
	W1216 06:21:42.985562    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:42.988741    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:43.016773    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.016773    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:43.020957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:43.059042    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.059042    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:43.062546    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:43.091529    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.091529    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:43.095547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:43.122188    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.122188    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:43.126264    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:43.154603    8452 logs.go:282] 0 containers: []
	W1216 06:21:43.154667    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:43.154687    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:43.154709    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:43.217437    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:43.217437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:43.256550    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:43.256550    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:43.334672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:43.327815    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.329013    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.330406    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.331323    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:43.332443    7242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:43.334672    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:43.334672    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:43.363324    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:43.363324    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:44.625758    7800 addons.go:239] Setting addon default-storageclass=true in "bridge-030800"
	I1216 06:21:44.961765    7800 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:21:44.962159    7800 host.go:66] Checking if "bridge-030800" exists ...
	W1216 06:21:45.051007    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:45.413866    7800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:45.416342    7800 cli_runner.go:164] Run: docker container inspect bridge-030800 --format={{.State.Status}}
	I1216 06:21:45.428762    7800 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.428762    7800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:21:45.433231    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.481472    7800 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:45.481472    7800 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:21:45.485567    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:45.487870    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.534738    7800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:21:45.540734    7800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56269 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-030800\id_rsa Username:docker}
	I1216 06:21:45.651776    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:21:45.743561    7800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:21:45.947134    7800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:21:48.661269    7800 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.1264885s)
	I1216 06:21:48.661269    7800 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
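The sed pipeline above injects a hosts stanza into the CoreDNS Corefile so that host.minikube.internal resolves from inside the cluster. The result can be inspected with a standard kubectl query (the stanza shown is what this run should produce; the host IP varies per machine):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected stanza:
    #     hosts {
    #        192.168.65.254 host.minikube.internal
    #        fallthrough
    #     }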
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.2776091s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.1858261s)
	I1216 06:21:48.929431    7800 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.9822555s)
	I1216 06:21:48.933443    7800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-030800
	I1216 06:21:48.974829    7800 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 06:21:48.977844    7800 addons.go:530] duration metric: took 4.4206041s for enable addons: enabled=[storage-provisioner default-storageclass]
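The addon flow above (scp the manifests, kubectl apply them) is what the user-facing minikube commands drive internally; the equivalent CLI, as a sketch against this run's profile:

    minikube -p bridge-030800 addons list
    minikube -p bridge-030800 addons enable storage-provisioner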
	I1216 06:21:48.994296    7800 node_ready.go:35] waiting up to 15m0s for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 node_ready.go:49] node "bridge-030800" is "Ready"
	I1216 06:21:49.024312    7800 node_ready.go:38] duration metric: took 30.0163ms for node "bridge-030800" to be "Ready" ...
	I1216 06:21:49.024312    7800 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:21:49.030307    7800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.051593    7800 api_server.go:72] duration metric: took 4.4943521s to wait for apiserver process to appear ...
	I1216 06:21:49.051593    7800 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:21:49.051593    7800 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56268/healthz ...
	I1216 06:21:49.061499    7800 api_server.go:279] https://127.0.0.1:56268/healthz returned 200:
	ok
	I1216 06:21:49.063514    7800 api_server.go:141] control plane version: v1.34.2
	I1216 06:21:49.063514    7800 api_server.go:131] duration metric: took 11.9204ms to wait for apiserver health ...
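The health check above hits the host-mapped port (127.0.0.1:56268) directly; the same probes are available through kubectl once the kubeconfig points at the cluster (standard apiserver endpoints, not specific to this run):

    kubectl get --raw /healthz
    kubectl get --raw '/readyz?verbose'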
	I1216 06:21:49.064510    7800 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:21:49.088115    7800 system_pods.go:59] 8 kube-system pods found
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.088115    7800 system_pods.go:61] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.088115    7800 system_pods.go:61] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.088115    7800 system_pods.go:61] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.088115    7800 system_pods.go:74] duration metric: took 23.6038ms to wait for pod list to return data ...
	I1216 06:21:49.088115    7800 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:21:49.094110    7800 default_sa.go:45] found service account: "default"
	I1216 06:21:49.094110    7800 default_sa.go:55] duration metric: took 5.9949ms for default service account to be created ...
	I1216 06:21:49.094110    7800 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:21:49.100097    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.100097    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.100097    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.100097    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.100097    7800 retry.go:31] will retry after 202.33386ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.170358    7800 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-030800" context rescaled to 1 replicas
	I1216 06:21:49.310950    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.310950    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.310950    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.310950    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.310950    7800 retry.go:31] will retry after 302.122926ms: missing components: kube-dns, kube-proxy
	I1216 06:21:49.630338    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630425    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:49.630577    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:49.630577    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:49.630663    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:49.630695    7800 retry.go:31] will retry after 447.973015ms: missing components: kube-dns, kube-proxy
	I1216 06:21:45.929428    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:45.950755    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:45.982605    8452 logs.go:282] 0 containers: []
	W1216 06:21:45.982605    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:45.987711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:46.020649    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.020649    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:46.024931    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:46.058836    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.058836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:46.066651    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:46.094860    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.094860    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:46.098689    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:46.127246    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.127246    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:46.130937    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:46.159519    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.159519    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:46.163609    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:46.195483    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.195483    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:46.199178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:46.229349    8452 logs.go:282] 0 containers: []
	W1216 06:21:46.229349    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:46.229349    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:46.229349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:46.292392    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:46.292392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:46.330903    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:46.330903    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:46.427283    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:46.411370    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.412421    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.413233    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.414909    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:46.415254    7423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:46.427283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:46.427283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:46.458647    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:46.458647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:49.005293    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:49.027308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:49.061499    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.061499    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:49.064510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:49.094110    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.094110    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:49.097105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:49.126749    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.126749    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:49.132748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:49.169344    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.169344    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:49.174332    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:49.209103    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.209103    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:49.214581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:49.248644    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.248644    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:49.251643    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:49.281632    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.281632    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:49.285645    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:49.316955    8452 logs.go:282] 0 containers: []
	W1216 06:21:49.316955    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:49.316955    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:49.317942    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:49.391656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:49.391738    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:49.432724    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:49.432724    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:49.523989    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:49.514359    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.515480    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.516405    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.518597    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:49.519653    7586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:49.523989    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:49.523989    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:49.552004    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:49.552004    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:48.467044    4424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-030800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (14.3450525s)
	I1216 06:21:48.467044    4424 kic.go:203] duration metric: took 14.349809s to extract preloaded images to volume ...
	I1216 06:21:48.470844    4424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 06:21:48.730876    4424 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-16 06:21:48.710057733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 06:21:48.733867    4424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 06:21:48.983392    4424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-030800 --name kubenet-030800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-030800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-030800 --network kubenet-030800 --ip 192.168.103.2 --volume kubenet-030800:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1216 06:21:49.764686    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Running}}
	I1216 06:21:49.828590    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:49.890595    4424 cli_runner.go:164] Run: docker exec kubenet-030800 stat /var/lib/dpkg/alternatives/iptables
	I1216 06:21:50.004225    4424 oci.go:144] the created container "kubenet-030800" has a running status.
	I1216 06:21:50.005228    4424 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.057161    4424 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 06:21:50.141101    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:50.207656    4424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 06:21:50.207656    4424 kic_runner.go:114] Args: [docker exec --privileged kubenet-030800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 06:21:50.326664    4424 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa...
	I1216 06:21:50.087090    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.087090    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:21:50.087090    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.087090    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.087090    7800 retry.go:31] will retry after 426.637768ms: missing components: kube-dns, kube-proxy
	I1216 06:21:50.538640    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:50.538640    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:50.538640    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:50.538640    7800 retry.go:31] will retry after 479.139187ms: missing components: kube-dns
	I1216 06:21:51.025065    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.025065    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.025145    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.025145    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:21:51.025193    7800 retry.go:31] will retry after 758.159415ms: missing components: kube-dns
	I1216 06:21:51.791088    7800 system_pods.go:86] 8 kube-system pods found
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-8s6v4" [0c24cddb-fc16-4f99-9f36-735e6190b514] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "coredns-66bc5c9577-tcbrk" [c580f0d3-4332-4573-ab7d-429e1fc0c639] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:21:51.791088    7800 system_pods.go:89] "etcd-bridge-030800" [3549f3f5-da9b-431b-845c-e6530ded60b1] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-apiserver-bridge-030800" [63f15ae2-8266-4f38-b58a-2f07b4075231] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-controller-manager-bridge-030800" [a169e8f5-738a-4f7a-8845-c944e74a1552] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-proxy-pbfkb" [94309880-f831-45e9-b646-c57685715931] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "kube-scheduler-bridge-030800" [2c48ee83-df6b-4392-8a83-81ee32d11abd] Running
	I1216 06:21:51.791088    7800 system_pods.go:89] "storage-provisioner" [a6eee8e0-ccce-46c4-a0f6-fbda3a8de273] Running
	I1216 06:21:51.791088    7800 system_pods.go:126] duration metric: took 2.6969413s to wait for k8s-apps to be running ...
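	Note: the retry.go lines above show minikube polling the kube-system pods with a growing delay until no required component (kube-dns, kube-proxy, ...) is missing. A hypothetical shell equivalent of that wait loop, simplified to a single Pending check (kubectl context assumed):

	    # Poll until no kube-system pod is Pending, growing the delay each round.
	    delay=0.5
	    while kubectl -n kube-system get pods --no-headers | grep -q Pending; do
	        sleep "$delay"
	        delay=$(awk -v d="$delay" 'BEGIN { print d * 1.5 }')
	    done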
	I1216 06:21:51.791088    7800 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:21:51.798336    7800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:21:51.818183    7800 system_svc.go:56] duration metric: took 27.0943ms WaitForService to wait for kubelet
	I1216 06:21:51.818183    7800 kubeadm.go:587] duration metric: took 7.2609035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:21:51.818183    7800 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:21:51.825244    7800 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:21:51.825244    7800 node_conditions.go:123] node cpu capacity is 16
	I1216 06:21:51.825244    7800 node_conditions.go:105] duration metric: took 7.0607ms to run NodePressure ...
	I1216 06:21:51.825244    7800 start.go:242] waiting for startup goroutines ...
	I1216 06:21:51.825244    7800 start.go:247] waiting for cluster config update ...
	I1216 06:21:51.825244    7800 start.go:256] writing updated cluster config ...
	I1216 06:21:51.833706    7800 ssh_runner.go:195] Run: rm -f paused
	I1216 06:21:51.841597    7800 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:21:51.851622    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:21:53.862268    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:52.109148    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:52.140748    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:52.186855    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.186855    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:52.193111    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:52.227511    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.227511    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:52.232508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:52.265331    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.265331    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:52.270635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:52.301130    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.301130    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:52.307669    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:52.342623    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.342623    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:52.347794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:52.387246    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.387246    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:52.392629    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:52.445055    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.445143    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:52.449252    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:52.480071    8452 logs.go:282] 0 containers: []
	W1216 06:21:52.480071    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:52.480071    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:52.480071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:52.547115    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:52.547115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:52.590630    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:52.590630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:52.690412    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:52.676923    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.680185    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.681831    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.683876    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:52.684642    7752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:52.690412    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:52.690412    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:52.718573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:52.718573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:52.546527    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:21:52.603159    4424 machine.go:94] provisionDockerMachine start ...
	I1216 06:21:52.606161    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.662674    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.679442    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.679519    4424 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 06:21:52.842464    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:52.842464    4424 ubuntu.go:182] provisioning hostname "kubenet-030800"
	I1216 06:21:52.846473    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:52.908771    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:52.908771    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:52.908771    4424 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-030800 && echo "kubenet-030800" | sudo tee /etc/hostname
	I1216 06:21:53.084692    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-030800
	
	I1216 06:21:53.088917    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.150284    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.150284    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.150284    4424 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-030800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-030800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-030800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 06:21:53.322772    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: 
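	Note: the SSH command above pins the node name in /etc/hosts, rewriting an existing 127.0.1.1 entry when present and appending one otherwise. The same logic as a standalone script (the hostname value is just this profile's name):

	    NAME=kubenet-030800
	    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
	        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	            sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
	        else
	            echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	        fi
	    fi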
	I1216 06:21:53.322772    4424 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1216 06:21:53.322772    4424 ubuntu.go:190] setting up certificates
	I1216 06:21:53.322772    4424 provision.go:84] configureAuth start
	I1216 06:21:53.326658    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:53.379472    4424 provision.go:143] copyHostCerts
	I1216 06:21:53.379472    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1216 06:21:53.379472    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1216 06:21:53.379472    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1216 06:21:53.381506    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1216 06:21:53.381506    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1216 06:21:53.382025    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1216 06:21:53.383238    4424 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1216 06:21:53.383286    4424 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1216 06:21:53.383622    4424 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1216 06:21:53.384729    4424 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-030800 san=[127.0.0.1 192.168.103.2 kubenet-030800 localhost minikube]
	I1216 06:21:53.446404    4424 provision.go:177] copyRemoteCerts
	I1216 06:21:53.450578    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 06:21:53.453632    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.508049    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:53.625841    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 06:21:53.652177    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 06:21:53.678648    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 06:21:53.702593    4424 provision.go:87] duration metric: took 379.8156ms to configureAuth
	I1216 06:21:53.702593    4424 ubuntu.go:206] setting minikube options for container-runtime
	I1216 06:21:53.703116    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:21:53.706020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:53.763080    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:53.763659    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:53.763659    4424 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1216 06:21:53.941197    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1216 06:21:53.941229    4424 ubuntu.go:71] root file system type: overlay
	I1216 06:21:53.941395    4424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1216 06:21:53.945310    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.000318    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.000318    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.000318    4424 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1216 06:21:54.194977    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1216 06:21:54.198986    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:54.262183    4424 main.go:143] libmachine: Using SSH client type: native
	I1216 06:21:54.262873    4424 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff78c48fd00] 0x7ff78c492860 <nil>  [] 0s} 127.0.0.1 56386 <nil> <nil>}
	I1216 06:21:54.262912    4424 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1216 06:21:55.764091    4424 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-16 06:21:54.174803160 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1216 06:21:55.764091    4424 machine.go:97] duration metric: took 3.1608879s to provisionDockerMachine
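	Note: the unit refresh above is idempotent: the diff output is printed only because the rendered docker.service actually differed, which is what triggers the replace-and-restart branch. The same pattern in isolation:

	    # Swap in the new unit and restart docker only on a real change.
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	        sudo systemctl daemon-reload
	        sudo systemctl enable docker
	        sudo systemctl restart docker
	    fi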
	I1216 06:21:55.764091    4424 client.go:176] duration metric: took 23.8239056s to LocalClient.Create
	I1216 06:21:55.764091    4424 start.go:167] duration metric: took 23.8239056s to libmachine.API.Create "kubenet-030800"
	I1216 06:21:55.764091    4424 start.go:293] postStartSetup for "kubenet-030800" (driver="docker")
	I1216 06:21:55.764091    4424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 06:21:55.769330    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 06:21:55.774020    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:55.832721    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:55.960433    4424 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 06:21:55.968801    4424 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 06:21:55.968801    4424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1216 06:21:55.968801    4424 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1216 06:21:55.969505    4424 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem -> 117042.pem in /etc/ssl/certs
	I1216 06:21:55.973822    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 06:21:55.985938    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /etc/ssl/certs/117042.pem (1708 bytes)
	I1216 06:21:56.011522    4424 start.go:296] duration metric: took 247.4281ms for postStartSetup
	I1216 06:21:56.016962    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.071317    4424 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\config.json ...
	I1216 06:21:56.078704    4424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 06:21:56.082131    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	W1216 06:21:55.087921    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): Get "https://127.0.0.1:55116/api/v1/nodes/no-preload-686300": EOF
	I1216 06:21:56.146380    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.278810    4424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 06:21:56.289463    4424 start.go:128] duration metric: took 24.3526481s to createHost
	I1216 06:21:56.289463    4424 start.go:83] releasing machines lock for "kubenet-030800", held for 24.352923s
	I1216 06:21:56.293770    4424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-030800
	I1216 06:21:56.349762    4424 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1216 06:21:56.354527    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.355718    4424 ssh_runner.go:195] Run: cat /version.json
	I1216 06:21:56.359207    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:21:56.419217    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.420010    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:21:56.548149    4424 ssh_runner.go:195] Run: systemctl --version
	W1216 06:21:56.549226    4424 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
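	Note: this probe exits 127 because the Windows binary name "curl.exe" does not exist inside the Linux guest, which appears to be what produces the "Failing to connect to https://registry.k8s.io/" warning a few lines below. A portable form of the same reachability check, run inside the guest:

	    # Probe registry reachability with the Linux curl binary instead.
	    curl -sS -m 2 https://registry.k8s.io/ >/dev/null \
	        && echo "registry reachable" \
	        || echo "registry unreachable"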
	I1216 06:21:56.567514    4424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 06:21:56.574755    4424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 06:21:56.580435    4424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 06:21:56.633416    4424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 06:21:56.633416    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:56.633416    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:56.633416    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:56.657618    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1216 06:21:56.658090    4424 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1216 06:21:56.658134    4424 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1216 06:21:56.678200    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1216 06:21:56.690681    4424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1216 06:21:56.695430    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1216 06:21:56.714310    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.735757    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1216 06:21:56.754647    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1216 06:21:56.771876    4424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 06:21:56.790078    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1216 06:21:56.810936    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1216 06:21:56.828529    4424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1216 06:21:56.859717    4424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 06:21:56.876724    4424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 06:21:56.891719    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.036224    4424 ssh_runner.go:195] Run: sudo systemctl restart containerd
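	Note: the run of sed commands above rewrites /etc/containerd/config.toml in place (cgroupfs cgroup driver, runc v2 shim, CNI conf dir, unprivileged ports) before containerd is restarted. Condensed into one script for reference, using the same expressions as logged:

	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	    sudo systemctl daemon-reload
	    sudo systemctl restart containerd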
	I1216 06:21:57.185425    4424 start.go:496] detecting cgroup driver to use...
	I1216 06:21:57.185522    4424 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 06:21:57.190092    4424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1216 06:21:57.213249    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.239566    4424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 06:21:57.303231    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 06:21:57.326154    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1216 06:21:57.344861    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 06:21:57.372889    4424 ssh_runner.go:195] Run: which cri-dockerd
	I1216 06:21:57.386009    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1216 06:21:57.401220    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1216 06:21:57.422607    4424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1216 06:21:57.590920    4424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1216 06:21:57.727211    4424 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1216 06:21:57.727211    4424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1216 06:21:57.751771    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1216 06:21:57.772961    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:57.912458    4424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1216 06:21:58.834645    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 06:21:58.856232    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1216 06:21:58.880727    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:58.906712    4424 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1216 06:21:59.052553    4424 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1216 06:21:59.194941    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.333924    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1216 06:21:59.357147    4424 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1216 06:21:59.379570    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:21:59.513788    4424 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1216 06:21:59.631489    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1216 06:21:59.649336    4424 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1216 06:21:59.653752    4424 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1216 06:21:59.660755    4424 start.go:564] Will wait 60s for crictl version
	I1216 06:21:59.665368    4424 ssh_runner.go:195] Run: which crictl
	I1216 06:21:59.677200    4424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1216 06:21:59.717428    4424 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1216 06:21:59.720622    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1216 06:21:59.765567    4424 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
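	Note: the crictl version check above works because minikube wrote /etc/crictl.yaml pointing at cri-dockerd's socket earlier in this run. The equivalent query with the endpoint passed explicitly, bypassing the config file:

	    # Query the CRI runtime through cri-dockerd's socket directly.
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version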
	W1216 06:21:55.865199    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	W1216 06:21:58.365962    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:21:55.273773    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:55.297441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:55.334351    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.334404    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:55.338338    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:55.372344    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.372344    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:55.375335    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:55.429711    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.429711    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:55.432707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:55.463415    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.463415    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:55.466882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:55.495871    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.495871    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:55.499782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:55.530135    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.530135    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:55.534032    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:55.561956    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.561956    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:55.567456    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:55.598684    8452 logs.go:282] 0 containers: []
	W1216 06:21:55.598684    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:55.598684    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:55.598684    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:55.661553    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:55.661553    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:55.699330    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:55.699330    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:55.806271    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:55.790032    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.792694    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.794430    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.795276    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:55.800381    7916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:21:55.806271    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:55.806271    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:55.839937    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:55.839937    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
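The container-status gather just above uses a small fallback: run crictl if it resolves on PATH, otherwise fall back to plain docker ps. The same one-liner, unpacked with comments (a readable rendering, not the minikube source):

    # `which crictl || echo crictl` keeps the command word non-empty either way;
    # if the crictl invocation fails for any reason, `||` falls through to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a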
	I1216 06:21:58.400590    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:21:58.422580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:21:58.453081    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.453081    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:21:58.457176    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:21:58.482739    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.482739    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:21:58.486288    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:21:58.516198    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.516198    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:21:58.520374    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:21:58.550134    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.550134    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:21:58.553679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:21:58.585815    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.585815    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:21:58.589532    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:21:58.620180    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.620180    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:21:58.626021    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:21:58.656946    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.656946    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:21:58.659942    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:21:58.687618    8452 logs.go:282] 0 containers: []
	W1216 06:21:58.687618    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:21:58.687618    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:21:58.687618    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:21:58.777493    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:21:58.766575    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.767780    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.768598    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.770959    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:21:58.772697    8077 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
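Every describe-nodes pass above fails the same way: kubectl cannot reach localhost:8443 because, per the "0 containers" lines, no kube-apiserver container exists yet. A hedged two-step check (run inside the node via minikube ssh) that would confirm the same state:

    # no apiserver container and nothing answering on 8443 => connection refused is expected
    sudo docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"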
	I1216 06:21:58.777493    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:21:58.777493    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:21:58.805676    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:21:58.805676    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:21:58.860391    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:21:58.860391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:21:58.925444    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:21:58.925444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:21:59.807579    4424 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.3 ...
	I1216 06:21:59.810667    4424 cli_runner.go:164] Run: docker exec -t kubenet-030800 dig +short host.docker.internal
	I1216 06:21:59.962844    4424 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1216 06:21:59.967733    4424 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1216 06:21:59.974503    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
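The /etc/hosts rewrite above drops any stale host.minikube.internal entry, appends the freshly dug host IP, and installs the result with a sudo copy. The same logic spread out with comments (a sketch; $$ is the shell's PID, used as a unique temp-file suffix):

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts    # strip any stale mapping
      printf '192.168.65.254\thost.minikube.internal\n'  # append the fresh one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts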
	I1216 06:21:59.995371    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:00.053937    4424 kubeadm.go:884] updating cluster {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 06:22:00.053937    4424 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 06:22:00.057874    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.094105    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.094105    4424 docker.go:621] Images already preloaded, skipping extraction
	I1216 06:22:00.097332    4424 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1216 06:22:00.129189    4424 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1216 06:22:00.129225    4424 cache_images.go:86] Images are preloaded, skipping loading
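Preload detection is just an image inventory: list every repo:tag in the node's Docker daemon and check that the expected control-plane set is present. A one-line spot check under the same assumption (kubenet-030800 profile, versions taken from the stdout block above):

    minikube ssh -p kubenet-030800 -- docker images --format '{{.Repository}}:{{.Tag}}' \
      | grep -E 'v1.34.2|etcd:3.6.5-0|coredns:v1.12.1|storage-provisioner:v5'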
	I1216 06:22:00.129280    4424 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 docker true true} ...
	I1216 06:22:00.129486    4424 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-030800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 06:22:00.132350    4424 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1216 06:22:00.208072    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:00.208072    4424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 06:22:00.208072    4424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-030800 NodeName:kubenet-030800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 06:22:00.208072    4424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-030800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
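The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file directly; a hedged sketch using the binary path from this log, run inside the node:

    # validate the rendered config without starting anything
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new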
	I1216 06:22:00.213204    4424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 06:22:00.225061    4424 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 06:22:00.229012    4424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 06:22:00.242127    4424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
	I1216 06:22:00.258591    4424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 06:22:00.278876    4424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1216 06:22:00.305788    4424 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1216 06:22:00.315868    4424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 06:22:00.339710    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:00.483171    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:00.505844    4424 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800 for IP: 192.168.103.2
	I1216 06:22:00.505844    4424 certs.go:195] generating shared ca certs ...
	I1216 06:22:00.505844    4424 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.506501    4424 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1216 06:22:00.507023    4424 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1216 06:22:00.507484    4424 certs.go:257] generating profile certs ...
	I1216 06:22:00.507484    4424 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key
	I1216 06:22:00.507484    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt with IP's: []
	I1216 06:22:00.552695    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt ...
	I1216 06:22:00.552695    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.crt: {Name:mk4783bd7e1619c0ea341eaca75005ddd88d5b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.553960    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key ...
	I1216 06:22:00.553960    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\client.key: {Name:mk427571c1896a50b896e76c58a633b5512ad44e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.555335    4424 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8
	I1216 06:22:00.555661    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1216 06:22:00.581299    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 ...
	I1216 06:22:00.581299    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8: {Name:mk9cb22362f0ba7f5c0b5c6877c5c2e8d72eb278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.582304    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 ...
	I1216 06:22:00.582304    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8: {Name:mk2a3e21d232de7f748cffa074c96be0850cc9f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.583303    4424 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt
	I1216 06:22:00.599920    4424 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key.cbfeaeb8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key
	I1216 06:22:00.600703    4424 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key
	I1216 06:22:00.601353    4424 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt with IP's: []
	I1216 06:22:00.664564    4424 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt ...
	I1216 06:22:00.664564    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt: {Name:mk02eb62f20a18ad60f930ae30a248a87b7cb658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.665010    4424 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key ...
	I1216 06:22:00.665010    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key: {Name:mk8a8b2a6c6b1b3e2e2cc574e01303d6680bf793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:00.680006    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem (1338 bytes)
	W1216 06:22:00.680554    4424 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704_empty.pem, impossibly tiny 0 bytes
	I1216 06:22:00.680554    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1216 06:22:00.680770    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1216 06:22:00.681404    4424 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem (1708 bytes)
	I1216 06:22:00.683052    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 06:22:00.710388    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 06:22:00.737370    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 06:22:00.766290    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 06:22:00.790943    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 06:22:00.815072    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 06:22:00.839330    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 06:22:00.863340    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-030800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 06:22:00.921806    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 06:22:00.945068    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11704.pem --> /usr/share/ca-certificates/11704.pem (1338 bytes)
	I1216 06:22:00.972351    4424 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\117042.pem --> /usr/share/ca-certificates/117042.pem (1708 bytes)
	I1216 06:22:00.998813    4424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 06:22:01.025404    4424 ssh_runner.go:195] Run: openssl version
	I1216 06:22:01.039534    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.056142    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 06:22:01.077227    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.085140    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:28 /usr/share/ca-certificates/minikubeCA.pem
	I1216 06:22:01.089133    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	W1216 06:22:03.871305    2100 node_ready.go:55] error getting node "no-preload-686300" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1216 06:22:03.871305    2100 node_ready.go:38] duration metric: took 6m0.0002926s for node "no-preload-686300" to be "Ready" ...
	I1216 06:22:03.874339    2100 out.go:203] 
	W1216 06:22:03.877050    2100 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1216 06:22:03.877050    2100 out.go:285] * 
	W1216 06:22:03.879403    2100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 06:22:03.881203    2100 out.go:203] 
	W1216 06:22:00.861344    7800 pod_ready.go:104] pod "coredns-66bc5c9577-8s6v4" is not "Ready", error: <nil>
	I1216 06:22:01.860562    7800 pod_ready.go:99] pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8s6v4" not found
	I1216 06:22:01.860562    7800 pod_ready.go:86] duration metric: took 10.0087717s for pod "coredns-66bc5c9577-8s6v4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:01.860562    7800 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:03.875170    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:01.467725    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:01.493737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:01.526377    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.526426    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:01.530527    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:01.563582    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.563582    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:01.567007    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:01.606460    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.606460    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:01.610155    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:01.647965    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.647965    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:01.652309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:01.691011    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.691011    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:01.695466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:01.736509    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.736509    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:01.739991    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:01.773642    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.773642    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:01.777617    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:01.811025    8452 logs.go:282] 0 containers: []
	W1216 06:22:01.811141    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:01.811141    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:01.811141    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:01.881533    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:01.881533    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.919632    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:01.919632    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:02.020491    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:02.009380    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.010794    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.014129    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.015532    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:02.016602    8248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:02.020538    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:02.020587    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:02.059933    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:02.060031    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:04.620893    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:04.645761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:04.680596    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.680596    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:04.684611    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:04.712607    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.712607    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:04.716737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:04.744218    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.744218    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:04.748501    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:04.787600    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.787668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:04.791349    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:04.825500    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.825547    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:04.829098    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:04.878465    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.878465    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:04.881466    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:04.910168    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.910168    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:04.914167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:04.949752    8452 logs.go:282] 0 containers: []
	W1216 06:22:04.949810    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:04.949810    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:04.949873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:01.143585    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 06:22:01.161031    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 06:22:01.179456    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.197251    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11704.pem /etc/ssl/certs/11704.pem
	I1216 06:22:01.216028    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.226660    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:45 /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.230697    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11704.pem
	I1216 06:22:01.278644    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 06:22:01.297647    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11704.pem /etc/ssl/certs/51391683.0
	I1216 06:22:01.317326    4424 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.341360    4424 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/117042.pem /etc/ssl/certs/117042.pem
	I1216 06:22:01.367643    4424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.377139    4424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:45 /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.383754    4424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117042.pem
	I1216 06:22:01.440843    4424 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 06:22:01.457977    4424 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/117042.pem /etc/ssl/certs/3ec20f2e.0
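Each openssl/ln pair above follows OpenSSL's CA lookup convention: certificates in /etc/ssl/certs are found via <subject-hash>.0 symlinks, where the hash comes from openssl x509 -hash. Condensed into one sketch for the minikubeCA case (the b5213941.0 link above):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"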
	I1216 06:22:01.476683    4424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 06:22:01.483599    4424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 06:22:01.484303    4424 kubeadm.go:401] StartCluster: {Name:kubenet-030800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kubenet-030800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 06:22:01.490132    4424 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1216 06:22:01.529050    4424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 06:22:01.545461    4424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 06:22:01.559986    4424 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1216 06:22:01.564509    4424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 06:22:01.575681    4424 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 06:22:01.575681    4424 kubeadm.go:158] found existing configuration files:
	
	I1216 06:22:01.581349    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 06:22:01.593595    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 06:22:01.599386    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 06:22:01.618969    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 06:22:01.633516    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 06:22:01.638266    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 06:22:01.656598    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.670398    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 06:22:01.674972    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 06:22:01.695466    4424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 06:22:01.709055    4424 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 06:22:01.713665    4424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 06:22:01.733357    4424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 06:22:01.884136    4424 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1216 06:22:01.891445    4424 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1216 06:22:01.994223    4424 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
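The init run above skips a long preflight list (ports, swap, CPU/memory, cgroup and filesystem checks) because the docker driver reuses a prepared node image; the three [WARNING ...] lines it still prints are non-fatal. To see the full preflight output without initialising anything, one could replay just that phase (a sketch, same binary and config as the command above):

    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml'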
	W1216 06:22:06.379758    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:08.874715    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:04.987656    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:04.987703    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:05.093013    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:05.076970    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.078109    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.079394    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.082557    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:05.084502    8409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:05.093013    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:05.093013    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:05.148503    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:05.148503    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:05.222357    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:05.222357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:07.791130    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:07.816699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:07.846890    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.846890    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:07.850551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:07.885179    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.885179    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:07.889622    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:07.920925    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.920925    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:07.925517    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:07.955043    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.955043    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:07.959825    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:07.988928    8452 logs.go:282] 0 containers: []
	W1216 06:22:07.988928    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:07.993735    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:08.025335    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.025335    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:08.031801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:08.063231    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.063231    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:08.068525    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:08.106217    8452 logs.go:282] 0 containers: []
	W1216 06:22:08.106217    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:08.106217    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:08.106217    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:08.173411    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:08.173411    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:08.241764    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:08.241764    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:08.282741    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:08.282741    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:08.376141    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:08.365700    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.366812    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.368081    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.370681    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:08.372107    8601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:08.376181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:08.376246    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
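
	The eight "docker ps" probes above are minikube's container scan for the control-plane components; when every probe returns zero IDs, the apiserver has simply not come up yet. For reference, a minimal Go sketch of the same scan, looping the exact command recorded in the log (an illustrative reconstruction, not minikube's actual logs.go source):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists containers whose name carries the "k8s_<component>"
	// prefix used by the cri-dockerd naming scheme, as in the log lines above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "kubernetes-dashboard"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
		}
	}
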
	W1216 06:22:10.875960    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	W1216 06:22:13.371029    7800 pod_ready.go:104] pod "coredns-66bc5c9577-tcbrk" is not "Ready", error: <nil>
	I1216 06:22:13.873624    7800 pod_ready.go:94] pod "coredns-66bc5c9577-tcbrk" is "Ready"
	I1216 06:22:13.873624    7800 pod_ready.go:86] duration metric: took 12.0128951s for pod "coredns-66bc5c9577-tcbrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.879094    7800 pod_ready.go:83] waiting for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.889705    7800 pod_ready.go:94] pod "etcd-bridge-030800" is "Ready"
	I1216 06:22:13.889705    7800 pod_ready.go:86] duration metric: took 10.6111ms for pod "etcd-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.893578    7800 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.912416    7800 pod_ready.go:94] pod "kube-apiserver-bridge-030800" is "Ready"
	I1216 06:22:13.912416    7800 pod_ready.go:86] duration metric: took 18.8376ms for pod "kube-apiserver-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:13.917120    7800 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.068093    7800 pod_ready.go:94] pod "kube-controller-manager-bridge-030800" is "Ready"
	I1216 06:22:14.068093    7800 pod_ready.go:86] duration metric: took 150.9707ms for pod "kube-controller-manager-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.266154    7800 pod_ready.go:83] waiting for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:14.666596    7800 pod_ready.go:94] pod "kube-proxy-pbfkb" is "Ready"
	I1216 06:22:14.666596    7800 pod_ready.go:86] duration metric: took 400.436ms for pod "kube-proxy-pbfkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:10.906574    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:10.929977    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:10.963006    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.963006    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:10.966334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:10.995517    8452 logs.go:282] 0 containers: []
	W1216 06:22:10.995517    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:10.998887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:11.027737    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.027771    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:11.034529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:11.070221    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.070221    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:11.075447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:11.105575    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.105575    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:11.108569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:11.143549    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.143549    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:11.146562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:11.178034    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.178034    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:11.181411    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:11.211522    8452 logs.go:282] 0 containers: []
	W1216 06:22:11.211522    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:11.211522    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:11.211522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:11.244289    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:11.244289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:11.295870    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:11.295870    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:11.359418    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:11.360418    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:11.394416    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:11.394416    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:11.489247    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:11.480642    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.481500    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.484037    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.485215    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:11.486083    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:13.994214    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:14.016691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:14.049641    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.049641    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:14.053607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:14.088893    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.088893    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:14.092847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:14.131857    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.131857    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:14.135845    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:14.168503    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.168503    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:14.172477    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:14.200948    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.200948    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:14.204642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:14.234975    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.234975    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:14.238802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:14.274052    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.274107    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:14.277642    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:14.306199    8452 logs.go:282] 0 containers: []
	W1216 06:22:14.306199    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:14.306199    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:14.306199    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:14.374972    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:14.374972    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:14.411356    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:14.411356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:14.498252    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:14.489335    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.490502    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.491815    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.493244    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:14.494218    8921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:14.498283    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:14.498283    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:14.528112    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:14.528112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
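
	Every "failed describe nodes" block above reduces to the same condition: nothing is listening on localhost:8443, so each kubectl invocation dies with "connection refused". A hedged sketch of a cheap TCP pre-check that would distinguish "apiserver down" from other kubectl failures (illustrative only; the address is the one from the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverUp reports whether anything accepts TCP connections on addr.
	func apiserverUp(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		if !apiserverUp("localhost:8443") {
			fmt.Println("apiserver not reachable; skipping `kubectl describe nodes`")
			return
		}
		fmt.Println("apiserver reachable")
	}
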
	I1216 06:22:14.872200    7800 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:94] pod "kube-scheduler-bridge-030800" is "Ready"
	I1216 06:22:15.267078    7800 pod_ready.go:86] duration metric: took 394.8723ms for pod "kube-scheduler-bridge-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:15.267078    7800 pod_ready.go:40] duration metric: took 23.4251556s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:15.362849    7800 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:15.367720    7800 out.go:179] * Done! kubectl is now configured to use "bridge-030800" cluster and "default" namespace by default
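
	The pod_ready lines above poll each kube-system pod until its Ready condition reports True (or the pod is gone). A minimal sketch of that polling pattern via kubectl's jsonpath output, assuming kubectl is on PATH and the active kubeconfig points at the cluster; the pod name is the one from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reads the pod's Ready condition through kubectl jsonpath.
	func podReady(ns, name string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			ok, err := podReady("kube-system", "coredns-66bc5c9577-tcbrk")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for Ready")
	}
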
	I1216 06:22:17.092050    4424 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 06:22:17.092050    4424 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 06:22:17.093065    4424 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 06:22:17.093065    4424 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 06:22:17.096059    4424 out.go:252]   - Generating certificates and keys ...
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 06:22:17.096059    4424 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-030800 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 06:22:17.097054    4424 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 06:22:17.098050    4424 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 06:22:17.099055    4424 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 06:22:17.099055    4424 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 06:22:17.102055    4424 out.go:252]   - Booting up control plane ...
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 06:22:17.102055    4424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 06:22:17.103056    4424 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 06:22:17.104058    4424 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.507351804s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 06:22:17.104058    4424 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.957344338s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.90080548s
	I1216 06:22:17.105057    4424 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002224001s
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 06:22:17.106067    4424 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 06:22:17.106067    4424 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 06:22:17.107057    4424 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-030800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 06:22:17.107057    4424 kubeadm.go:319] [bootstrap-token] Using token: rs8etp.b2dh1vgtia9jcvb4
	I1216 06:22:17.081041    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:17.103056    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:17.137059    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.137059    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:17.141064    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:17.172640    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.172640    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:17.176638    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:17.210910    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.210910    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:17.215347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:17.248986    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.248986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:17.252989    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:17.287415    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.287415    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:17.293572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:17.324098    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.324098    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:17.330062    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:17.366512    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.366512    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:17.370101    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:17.402400    8452 logs.go:282] 0 containers: []
	W1216 06:22:17.402400    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:17.402400    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:17.402400    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:17.455027    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:17.455027    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:17.513029    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:17.513029    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:17.548022    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:17.548022    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:17.645629    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:17.632911    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.634385    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.635629    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.637461    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:17.638864    9103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:17.645629    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:17.645629    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:17.110053    4424 out.go:252]   - Configuring RBAC rules ...
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 06:22:17.110053    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 06:22:17.111060    4424 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 06:22:17.111060    4424 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 06:22:17.111060    4424 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.111060    4424 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 06:22:17.111060    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 06:22:17.112052    4424 kubeadm.go:319] 
	I1216 06:22:17.112052    4424 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 06:22:17.113053    4424 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 06:22:17.113053    4424 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 06:22:17.113053    4424 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 06:22:17.113053    4424 kubeadm.go:319] 
	I1216 06:22:17.113053    4424 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--control-plane 
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 06:22:17.114052    4424 kubeadm.go:319] 
	I1216 06:22:17.114052    4424 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rs8etp.b2dh1vgtia9jcvb4 \
	I1216 06:22:17.114052    4424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:fe481d20f756e99f401343491acedb636344aa6b701bd55f35aa24412c3f05b7 
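
	The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A short sketch of that computation, assuming the conventional kubeadm CA path inside the node:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER SubjectPublicKeyInfo, which is what kubeadm prints.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
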
	I1216 06:22:17.114052    4424 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1216 06:22:17.114052    4424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.122049    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-030800 minikube.k8s.io/updated_at=2025_12_16T06_22_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=kubenet-030800 minikube.k8s.io/primary=true
	I1216 06:22:17.134054    4424 ops.go:34] apiserver oom_adj: -16
	I1216 06:22:17.253989    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:17.753536    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.254825    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:18.755186    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.255440    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:19.754492    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.256463    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:20.753254    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.253896    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.753097    4424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 06:22:21.858877    4424 kubeadm.go:1114] duration metric: took 4.7437541s to wait for elevateKubeSystemPrivileges
	I1216 06:22:21.858877    4424 kubeadm.go:403] duration metric: took 20.3742909s to StartCluster
	I1216 06:22:21.858877    4424 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.858877    4424 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 06:22:21.861003    4424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 06:22:21.861972    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 06:22:21.861972    4424 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1216 06:22:21.861972    4424 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 06:22:21.861972    4424 addons.go:70] Setting storage-provisioner=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:239] Setting addon storage-provisioner=true in "kubenet-030800"
	I1216 06:22:21.861972    4424 addons.go:70] Setting default-storageclass=true in profile "kubenet-030800"
	I1216 06:22:21.861972    4424 config.go:182] Loaded profile config "kubenet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 06:22:21.861972    4424 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-030800"
	I1216 06:22:21.861972    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.864167    4424 out.go:179] * Verifying Kubernetes components...
	I1216 06:22:21.875224    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:21.875857    4424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 06:22:21.939068    4424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 06:22:21.939740    4424 addons.go:239] Setting addon default-storageclass=true in "kubenet-030800"
	I1216 06:22:21.939740    4424 host.go:66] Checking if "kubenet-030800" exists ...
	I1216 06:22:21.942493    4424 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:21.942493    4424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 06:22:21.947611    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:21.951961    4424 cli_runner.go:164] Run: docker container inspect kubenet-030800 --format={{.State.Status}}
	I1216 06:22:22.001257    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.003241    4424 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.003241    4424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 06:22:22.006248    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:22.070295    4424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56386 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-030800\id_rsa Username:docker}
	I1216 06:22:22.425928    4424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 06:22:22.444230    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 06:22:22.451290    4424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 06:22:22.540661    4424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 06:22:24.151685    4424 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7257338s)
	I1216 06:22:24.151837    4424 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1216 06:22:24.529871    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.0785053s)
	I1216 06:22:24.529983    4424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.0856125s)
	I1216 06:22:24.530029    4424 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9893406s)
	I1216 06:22:24.535621    4424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-030800
	I1216 06:22:24.547997    4424 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
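
	The CoreDNS rewrite that completed at 06:22:24 is the sed pipeline shown earlier in this run; its net effect, reconstructed from that command, is to add a log directive and inject this hosts stanza into the Corefile so pods can resolve host.minikube.internal to the host gateway:

	        hosts {
	           192.168.65.254 host.minikube.internal
	           fallthrough
	        }
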
	I1216 06:22:20.178315    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:20.202308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:20.231344    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.231344    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:20.236317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:20.279459    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.279459    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:20.283465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:20.322463    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.322463    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:20.327465    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:20.366466    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.366466    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:20.371478    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:20.409468    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.409468    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:20.413471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:20.447432    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.447432    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:20.451099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:20.486103    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.486103    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:20.490094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:20.530098    8452 logs.go:282] 0 containers: []
	W1216 06:22:20.530098    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:20.530098    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:20.530098    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:20.557089    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:20.557089    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:20.606234    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:20.607239    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:20.667498    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:20.667498    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:20.703674    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:20.703674    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:20.796605    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:20.783957    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.785904    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.787175    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.788215    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:20.789759    9270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:23.300916    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:23.324266    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:23.355598    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.355598    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:23.359141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:23.390554    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.390644    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:23.394340    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:23.423019    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.423019    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:23.426772    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:23.456953    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.457021    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:23.460762    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:23.491477    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.491477    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:23.495183    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:23.527107    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.527107    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:23.531577    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:23.559306    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.559306    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:23.563381    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:23.592615    8452 logs.go:282] 0 containers: []
	W1216 06:22:23.592615    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:23.592615    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:23.592615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:23.630103    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:23.630103    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:23.719384    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:23.707330    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.708530    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.709675    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.711409    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:23.712311    9414 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:23.719514    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:23.719546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:23.746097    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:23.746097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:23.807727    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:23.807727    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:24.550004    4424 addons.go:530] duration metric: took 2.6879945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 06:22:24.591996    4424 node_ready.go:35] waiting up to 15m0s for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 node_ready.go:49] node "kubenet-030800" is "Ready"
	I1216 06:22:24.646202    4424 node_ready.go:38] duration metric: took 54.2051ms for node "kubenet-030800" to be "Ready" ...
	I1216 06:22:24.646202    4424 api_server.go:52] waiting for apiserver process to appear ...
	I1216 06:22:24.652200    4424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:24.721472    4424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-030800" context rescaled to 1 replicas
	I1216 06:22:24.735392    4424 api_server.go:72] duration metric: took 2.87338s to wait for apiserver process to appear ...
	I1216 06:22:24.735392    4424 api_server.go:88] waiting for apiserver healthz status ...
	I1216 06:22:24.735392    4424 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56385/healthz ...
	I1216 06:22:24.821241    4424 api_server.go:279] https://127.0.0.1:56385/healthz returned 200:
	ok
	I1216 06:22:24.825583    4424 api_server.go:141] control plane version: v1.34.2
	I1216 06:22:24.825583    4424 api_server.go:131] duration metric: took 90.1899ms to wait for apiserver health ...
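	The healthz wait above polls the forwarded apiserver endpoint until it answers 200 "ok". A minimal Go sketch of that polling pattern, assuming an illustrative URL and timeout (this is not minikube's actual implementation):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver behind the forwarded port serves a self-signed
			// cert, so a bare health probe typically skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz at %s not ready after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://127.0.0.1:56385/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}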
	I1216 06:22:24.825583    4424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 06:22:24.832936    4424 system_pods.go:59] 8 kube-system pods found
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.833022    4424 system_pods.go:61] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.833022    4424 system_pods.go:61] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.833022    4424 system_pods.go:61] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.833131    4424 system_pods.go:74] duration metric: took 7.4392ms to wait for pod list to return data ...
	I1216 06:22:24.833131    4424 default_sa.go:34] waiting for default service account to be created ...
	I1216 06:22:24.838156    4424 default_sa.go:45] found service account: "default"
	I1216 06:22:24.838156    4424 default_sa.go:55] duration metric: took 5.0253ms for default service account to be created ...
	I1216 06:22:24.838156    4424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 06:22:24.844228    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:24.844228    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:24.844228    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:24.844228    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:24.844228    4424 retry.go:31] will retry after 236.325715ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.105694    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.105749    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.105770    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.105770    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.105770    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.105848    4424 retry.go:31] will retry after 372.640753ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.532382    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.532482    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.532513    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.532513    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.532587    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.532611    4424 retry.go:31] will retry after 313.138834ms: missing components: kube-dns, kube-proxy
	I1216 06:22:25.853141    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:25.853661    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:25.853661    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 06:22:25.853715    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 06:22:25.853715    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 06:22:25.853777    4424 retry.go:31] will retry after 472.942865ms: missing components: kube-dns, kube-proxy
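	The retry.go lines above record a poll-with-backoff loop: list the kube-system pods, compute which required components still have no Running pod, and sleep a randomized interval before trying again. A minimal sketch of that pattern, with an assumed component list and backoff bounds (in the real flow the running set comes from a fresh API listing each iteration):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// missingComponents returns the required components with no Running pod.
	func missingComponents(running map[string]bool, required []string) []string {
		var missing []string
		for _, c := range required {
			if !running[c] {
				missing = append(missing, c)
			}
		}
		return missing
	}

	func main() {
		required := []string{"kube-dns", "kube-proxy", "etcd", "kube-apiserver"}
		// Stand-in for a live pod listing; kube-dns and kube-proxy are
		// still Pending, matching the log above.
		running := map[string]bool{"etcd": true, "kube-apiserver": true}

		for attempt := 0; attempt < 5; attempt++ {
			missing := missingComponents(running, required)
			if len(missing) == 0 {
				fmt.Println("all components running")
				return
			}
			// Randomized backoff, similar in spirit to the intervals logged above.
			delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
			fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
			time.Sleep(delay)
		}
	}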
	I1216 06:22:26.382913    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:26.404112    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:26.436722    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.436722    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:26.440749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:26.470877    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.470877    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:26.474941    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:26.503887    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.503950    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:26.508216    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:26.538317    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.538317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:26.542754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:26.571126    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.571189    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:26.574883    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:26.604762    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.604762    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:26.608705    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:26.637404    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.637444    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:26.641214    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:26.669720    8452 logs.go:282] 0 containers: []
	W1216 06:22:26.669720    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:26.669720    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:26.669720    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:26.707289    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:26.707289    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:26.791357    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:26.780820    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.781959    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.783103    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.784493    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:26.786188    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:26.791357    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:26.791357    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:26.817227    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:26.817227    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:26.865832    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:26.865832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
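	Each diagnostic cycle above probes Docker once per control-plane component, filtering containers by the k8s_<component> name prefix and printing an empty list when nothing matches. A runnable sketch of that probe, using the same docker ps flags the log shows (requires a local docker CLI; the component list mirrors the log lines):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers whose name matches
	// the k8s_<component> prefix, including exited ones (-a).
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}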
	I1216 06:22:29.436231    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:29.459817    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:29.493134    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.493186    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:29.497118    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:29.526722    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.526722    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:29.531481    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:29.561672    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.561718    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:29.566882    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:29.595896    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.595947    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:29.599655    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:29.628575    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.628661    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:29.632644    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:29.660164    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.660164    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:29.663829    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:29.694413    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.694413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:29.698152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:29.725286    8452 logs.go:282] 0 containers: []
	W1216 06:22:29.725286    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:29.725355    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:29.725355    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:29.787721    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:29.787721    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:29.828376    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:29.828376    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:29.916249    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:29.905975    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.907149    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.909570    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.910484    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:29.911901    9751 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:29.916249    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:29.916249    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:29.942276    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:29.942276    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
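	The container-status step runs a shell fallback: use crictl when it is on PATH, otherwise fall back to docker ps -a. A small Go equivalent of that one-liner (the sudo prefix and commands mirror the log; error handling is simplified and this is a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl, falling back to docker, matching
	// the `which crictl || echo crictl` / `|| sudo docker ps -a` pattern
	// in the logged shell command.
	func containerStatus() ([]byte, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		}
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}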
	I1216 06:22:26.336069    4424 system_pods.go:86] 8 kube-system pods found
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-8qrgg" [06fec398-20d6-4b40-8f84-e7d91a397616] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "coredns-66bc5c9577-w7zmc" [5bbb0d62-9c71-4104-a191-712681305923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 06:22:26.336069    4424 system_pods.go:89] "etcd-kubenet-030800" [c13dfb94-e84b-42e9-87d5-707a44382f0b] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-apiserver-kubenet-030800" [78d16a0f-6755-49a0-9d8e-aaf94b1e6dae] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-controller-manager-kubenet-030800" [747285ba-358e-4ee7-aaef-c691153745fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-proxy-5b9l9" [4e89a1a5-9131-40a3-8815-ba145d6dca20] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "kube-scheduler-kubenet-030800" [e128ef7b-93bb-4990-a09c-54f7fa5b71d6] Running
	I1216 06:22:26.336069    4424 system_pods.go:89] "storage-provisioner" [7441b5d6-7e00-46c1-bdb8-6d0c20456a21] Running
	I1216 06:22:26.336069    4424 system_pods.go:126] duration metric: took 1.4978916s to wait for k8s-apps to be running ...
	I1216 06:22:26.336069    4424 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 06:22:26.342244    4424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 06:22:26.368294    4424 system_svc.go:56] duration metric: took 32.1861ms WaitForService to wait for kubelet
	I1216 06:22:26.368345    4424 kubeadm.go:587] duration metric: took 4.5062595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 06:22:26.368345    4424 node_conditions.go:102] verifying NodePressure condition ...
	I1216 06:22:26.376647    4424 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1216 06:22:26.376691    4424 node_conditions.go:123] node cpu capacity is 16
	I1216 06:22:26.376745    4424 node_conditions.go:105] duration metric: took 8.3456ms to run NodePressure ...
	I1216 06:22:26.376745    4424 start.go:242] waiting for startup goroutines ...
	I1216 06:22:26.376745    4424 start.go:247] waiting for cluster config update ...
	I1216 06:22:26.376795    4424 start.go:256] writing updated cluster config ...
	I1216 06:22:26.382913    4424 ssh_runner.go:195] Run: rm -f paused
	I1216 06:22:26.391122    4424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:26.399112    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:28.410987    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:30.912607    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
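	The pod_ready.go lines wait for each pod to either report a Ready condition or disappear entirely (a deleted pod counts as done). A sketch of that check using client-go, which is an assumption about the mechanism rather than minikube's exact code; the namespace and pod name are taken from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// readyOrGone reports true when the pod's Ready condition is True, or
	// when the pod no longer exists (NotFound).
	func readyOrGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is gone, e.g. removed by a rescale
		}
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			done, err := readyOrGone(ctx, cs, "kube-system", "coredns-66bc5c9577-w7zmc")
			if err != nil {
				fmt.Println("transient error:", err)
			}
			if done {
				fmt.Println("pod is Ready or gone")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}

	The "gone" branch matters here because the coredns deployment was rescaled to 1 replica earlier in this log, so one of the two coredns pods is expected to vanish rather than ever become Ready, which is exactly what the later "pods \"coredns-66bc5c9577-8qrgg\" not found" line records.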
	I1216 06:22:32.497361    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:32.517362    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:32.549841    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.549912    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:32.553592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:32.582070    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.582070    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:32.585068    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:32.612095    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.612095    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:32.615889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:32.644953    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.644953    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:32.649025    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:32.676348    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.676429    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:32.680134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:32.708040    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.708040    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:32.712034    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:32.745789    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.745789    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:32.752533    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:32.781449    8452 logs.go:282] 0 containers: []
	W1216 06:22:32.781504    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:32.781504    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:32.781504    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:32.843135    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:32.843135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:32.881564    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:32.881564    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:32.982597    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:32.971304    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.973701    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.974519    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977149    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:32.977986    9924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:32.982597    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:32.982597    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:33.013212    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:33.013212    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 06:22:33.410898    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	W1216 06:22:35.912070    4424 pod_ready.go:104] pod "coredns-66bc5c9577-8qrgg" is not "Ready", error: <nil>
	I1216 06:22:35.578218    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:35.601163    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:35.629786    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.629786    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:35.634440    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:35.663168    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.663168    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:35.667699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:35.699050    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.699050    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:35.703224    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:35.736149    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.736149    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:35.741542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:35.772450    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.772450    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:35.776692    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:35.804150    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.804150    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:35.808799    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:35.837871    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.837871    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:35.841100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:35.870769    8452 logs.go:282] 0 containers: []
	W1216 06:22:35.870769    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:35.870769    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:35.870769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:35.934803    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:35.934803    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:35.973201    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:35.973201    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:36.070057    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:36.056811   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.057654   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.060643   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.062270   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:36.065909   10093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:36.070057    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:36.070057    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:36.098690    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:36.098690    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:38.663786    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:38.688639    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:38.718646    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.718646    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:38.721640    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:38.751651    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.751651    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:38.754647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:38.784327    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.784327    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:38.788327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:38.815337    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.815337    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:38.818328    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:38.846331    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.846331    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:38.849339    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:38.880297    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.880297    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:38.884227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:38.917702    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.917702    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:38.920940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:38.964973    8452 logs.go:282] 0 containers: []
	W1216 06:22:38.964973    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:38.964973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:38.964973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:38.999971    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:38.999971    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:39.102927    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:39.094702   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.095790   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.097227   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.098352   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:39.099523   10258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:39.102927    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:39.102927    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:39.141934    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:39.141934    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:39.210081    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:39.210081    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:36.404625    4424 pod_ready.go:99] pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8qrgg" not found
	I1216 06:22:36.404625    4424 pod_ready.go:86] duration metric: took 10.0053735s for pod "coredns-66bc5c9577-8qrgg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:36.404625    4424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 06:22:38.415310    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:40.417680    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:41.775031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:41.798710    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:41.831778    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.831778    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:41.835461    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:41.866411    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.866411    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:41.871544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:41.902486    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.902486    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:41.905907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:41.932887    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.932887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:41.935886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:41.965890    8452 logs.go:282] 0 containers: []
	W1216 06:22:41.965890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:41.968887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:42.000893    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.000893    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:42.004876    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:42.043522    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.043591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:42.049149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:42.081678    8452 logs.go:282] 0 containers: []
	W1216 06:22:42.081678    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:42.081678    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:42.081678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:42.140208    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:42.140208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:42.198197    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:42.198197    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:42.241586    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:42.241586    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:42.350617    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:42.340462   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.341356   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.343942   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.346678   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:42.348067   10460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:42.350617    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:42.350617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:44.884303    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:44.902304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:44.933421    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.933421    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:44.938149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:44.974292    8452 logs.go:282] 0 containers: []
	W1216 06:22:44.974334    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:44.977512    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1216 06:22:42.418518    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:44.914304    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:45.010620    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.010620    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:45.013618    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:45.047628    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.047628    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:45.050627    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:45.089756    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.089850    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:45.096356    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:45.137323    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.137323    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:45.141322    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:45.169330    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.170335    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:45.173321    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:45.202336    8452 logs.go:282] 0 containers: []
	W1216 06:22:45.202336    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:45.202336    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:45.202336    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:45.227331    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:45.227331    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:45.275577    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:45.275630    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:45.335206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:45.335206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:45.372222    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:45.372222    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:45.471935    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:45.463678   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.464763   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.465688   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.466633   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:45.467315   10632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
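
The `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` calls repeated throughout this cycle are how minikube's logs.go enumerates control-plane containers by name; an empty result produces the `0 containers` / `No container was found matching` pairs above. A minimal Go sketch of the same probe, assuming a local `docker` CLI on PATH (the helper name and the component list are illustrative, not minikube's actual code):

    // containerids.go: list container IDs whose names match a k8s_<component>
    // prefix via the Docker CLI, mirroring the filter probe in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // Empty output means no matching containers, i.e. "0 containers: []".
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
        }
    }
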
	I1216 06:22:47.976320    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:48.004505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:48.037430    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.037430    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:48.040437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:48.076428    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.076477    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:48.081194    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:48.118536    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.118536    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:48.124810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:48.153702    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.153702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:48.159558    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:48.187736    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.187736    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:48.192607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:48.225619    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.225619    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:48.229580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:48.260085    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.260085    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:48.263087    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:48.294313    8452 logs.go:282] 0 containers: []
	W1216 06:22:48.294376    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:48.294376    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:48.294425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:48.345094    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:48.345094    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:48.423576    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:48.423576    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:48.459577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:48.459577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:48.548441    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:48.540125   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.541276   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.542401   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.543222   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:48.544293   10795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
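
The recurring `dial tcp [::1]:8443: connect: connection refused` stderr means nothing is listening on the apiserver port inside the node, so every `kubectl describe nodes` attempt fails the same way. A plain TCP dial reproduces the symptom without involving kubectl; a minimal sketch, assuming the `localhost:8443` address from the log:

    // apicheck.go: probe apiserver reachability with a bare TCP dial; a refused
    // connection here is exactly the failure the kubectl lines above report.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
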
	I1216 06:22:48.548441    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:48.548441    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1216 06:22:47.414818    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:49.417236    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:51.080561    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:51.104134    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:51.132144    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.132144    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:51.136151    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:51.163962    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.163962    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:51.169361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:51.198404    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.198404    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:51.201253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:51.229899    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.229899    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:51.232895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:51.261881    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.261881    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:51.264887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:51.295306    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.295306    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:51.298763    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:51.331779    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.331850    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:51.337211    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:51.367502    8452 logs.go:282] 0 containers: []
	W1216 06:22:51.367502    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:51.367502    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:51.367502    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:51.424226    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:51.424226    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:51.482475    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:51.482475    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:51.527426    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:51.527426    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:51.618444    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:51.608637   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.609657   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.611332   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.612563   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:51.613718   10963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:51.618444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:51.618444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.148108    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:54.167190    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:54.198456    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.198456    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:54.202605    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:54.236901    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.236901    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:54.240906    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:54.272541    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.272541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:54.277008    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:54.312764    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.312764    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:54.317359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:54.347564    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.347564    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:54.350557    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:54.377557    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.377557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:54.381564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:54.411585    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.411585    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:54.415565    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:54.447567    8452 logs.go:282] 0 containers: []
	W1216 06:22:54.447567    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:54.447567    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:54.447567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:22:54.483559    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:54.483559    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:54.589583    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:54.576583   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.577510   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.580979   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.582505   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:54.583721   11111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:54.589583    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:54.589583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:54.617283    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:54.617349    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:54.673906    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:54.673990    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 06:22:51.420194    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:53.916809    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	W1216 06:22:55.919718    4424 pod_ready.go:104] pod "coredns-66bc5c9577-w7zmc" is not "Ready", error: <nil>
	I1216 06:22:58.419688    4424 pod_ready.go:94] pod "coredns-66bc5c9577-w7zmc" is "Ready"
	I1216 06:22:58.419688    4424 pod_ready.go:86] duration metric: took 22.0147573s for pod "coredns-66bc5c9577-w7zmc" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.424677    4424 pod_ready.go:83] waiting for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.432677    4424 pod_ready.go:94] pod "etcd-kubenet-030800" is "Ready"
	I1216 06:22:58.432677    4424 pod_ready.go:86] duration metric: took 7.9992ms for pod "etcd-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.435689    4424 pod_ready.go:83] waiting for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.459477    4424 pod_ready.go:94] pod "kube-apiserver-kubenet-030800" is "Ready"
	I1216 06:22:58.459477    4424 pod_ready.go:86] duration metric: took 22.793ms for pod "kube-apiserver-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.463834    4424 pod_ready.go:83] waiting for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.611617    4424 pod_ready.go:94] pod "kube-controller-manager-kubenet-030800" is "Ready"
	I1216 06:22:58.611617    4424 pod_ready.go:86] duration metric: took 147.7381ms for pod "kube-controller-manager-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:58.811398    4424 pod_ready.go:83] waiting for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.211755    4424 pod_ready.go:94] pod "kube-proxy-5b9l9" is "Ready"
	I1216 06:22:59.211755    4424 pod_ready.go:86] duration metric: took 400.3513ms for pod "kube-proxy-5b9l9" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.412761    4424 pod_ready.go:83] waiting for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811735    4424 pod_ready.go:94] pod "kube-scheduler-kubenet-030800" is "Ready"
	I1216 06:22:59.811813    4424 pod_ready.go:86] duration metric: took 399.0464ms for pod "kube-scheduler-kubenet-030800" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 06:22:59.811850    4424 pod_ready.go:40] duration metric: took 33.4202632s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 06:22:59.926671    4424 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 06:22:59.930035    4424 out.go:179] * Done! kubectl is now configured to use "kubenet-030800" cluster and "default" namespace by default
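
The pod_ready.go lines from pid 4424 interleaved above are the other test's readiness wait: it polls each kube-system pod's Ready condition (the `is not "Ready"` warnings) until it flips to `Ready`, then records the duration metric. A minimal sketch of that wait loop, assuming `kubectl` on PATH and using the pod and namespace names from the log; the two-second cadence is illustrative:

    // podready.go: poll a pod's Ready condition until it is True or a deadline
    // passes, mirroring the pod_ready.go wait visible in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(ns, name string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            ok, err := podReady("kube-system", "coredns-66bc5c9577-w7zmc")
            if err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }
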
	I1216 06:22:57.250472    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:22:57.271468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:22:57.303800    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.303800    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:22:57.306801    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:22:57.338803    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.338803    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:22:57.341800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:22:57.369018    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.369018    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:22:57.372806    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:22:57.403510    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.403510    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:22:57.406808    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:22:57.440995    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.440995    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:22:57.444225    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:22:57.475612    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.475612    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:22:57.479607    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:22:57.509842    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.509842    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:22:57.513186    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:22:57.545981    8452 logs.go:282] 0 containers: []
	W1216 06:22:57.545981    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:22:57.545981    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:22:57.545981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:22:57.636635    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:22:57.627088   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628149   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.628944   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.631398   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:22:57.632605   11276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:22:57.636635    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:22:57.636635    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:22:57.662639    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:22:57.662639    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:22:57.720464    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:22:57.720464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:22:57.782460    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:22:57.782460    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.324364    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:00.344368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:00.375358    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.375358    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:00.378355    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:00.410368    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.410368    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:00.414359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:00.442364    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.442364    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:00.446359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:00.476371    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.476371    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:00.479359    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:00.508323    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.508323    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:00.512431    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:00.550611    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.550611    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:00.553606    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:00.586336    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.586336    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:00.590552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:00.624129    8452 logs.go:282] 0 containers: []
	W1216 06:23:00.624129    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:00.624129    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:00.624129    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:00.685547    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:00.685547    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:00.737417    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:00.737417    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:00.858025    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:00.848940   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.850087   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851043   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.851937   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:00.853894   11452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:00.858025    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:00.858025    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:00.886607    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:00.886607    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
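
The "container status" gather above uses the shell idiom `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`: prefer crictl when installed, otherwise fall back to the Docker CLI. A minimal Go sketch of the same tool selection, assuming at least one of the two binaries is on PATH:

    // containerstatus.go: run `crictl ps -a` if crictl is installed, otherwise
    // fall back to `docker ps -a`, as the gathered shell command above does.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Printf("%s ps -a failed: %v\n", tool, err)
        }
        fmt.Print(string(out))
    }
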
	I1216 06:23:03.463847    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:03.826614    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:03.881622    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.881622    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:03.887610    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:03.936557    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.937539    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:03.941562    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:03.979542    8452 logs.go:282] 0 containers: []
	W1216 06:23:03.979542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:03.983550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:04.020535    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.020535    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:04.025547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:04.064541    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.064541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:04.068548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:04.101538    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.101538    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:04.104544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:04.141752    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.141752    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:04.146757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:04.182755    8452 logs.go:282] 0 containers: []
	W1216 06:23:04.182755    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:04.182755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:04.182755    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:04.305758    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:04.305758    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:04.356425    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:04.356425    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:04.487429    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:04.472473   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.473695   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.474664   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.477104   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:04.478136   11628 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:04.487429    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:04.487429    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:04.526318    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:04.526362    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.087022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:07.110346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:07.137790    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.137790    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:07.141786    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:07.174601    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.174601    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:07.179419    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:07.211656    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.211656    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:07.216897    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:07.250459    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.250459    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:07.254048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:07.282207    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.282207    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:07.285851    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:07.313925    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.313925    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:07.317529    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:07.348851    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.348851    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:07.353083    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:07.381401    8452 logs.go:282] 0 containers: []
	W1216 06:23:07.381401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:07.381401    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:07.381401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:07.408641    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:07.408641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:07.450935    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:07.450935    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:07.512733    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:07.512733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:07.552522    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:07.552522    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:07.649624    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:07.639634   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.640833   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.642247   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.643320   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:07.644215   11807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.155054    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:10.178201    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:10.207068    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.207068    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:10.210473    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:10.239652    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.239652    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:10.242766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:10.274887    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.274887    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:10.278519    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:10.308294    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.308351    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:10.312209    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:10.342572    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.342572    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:10.346437    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:10.375569    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.375630    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:10.378861    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:10.405446    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.405446    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:10.410730    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:10.441244    8452 logs.go:282] 0 containers: []
	W1216 06:23:10.441244    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:10.441244    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:10.441244    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:10.502753    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:10.502753    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:10.540437    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:10.540437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:10.626853    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1216 06:23:10.617466   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.618251   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.621144   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.622521   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:10.623725   11959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 06:23:10.626853    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:10.626853    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:10.654987    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:10.655058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
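
Each diagnostic cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: pgrep exits 0 only if a running kube-apiserver process matches the full command line, and its nonzero exit on no match is what sends every cycle here into log gathering. A minimal sketch of that liveness probe, assuming `pgrep` is available:

    // apiserverproc.go: check for a running kube-apiserver process the way each
    // cycle above does; pgrep exits with status 1 when nothing matches.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process found:", err)
            return
        }
        fmt.Println("newest kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
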
	I1216 06:23:13.213336    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:13.237358    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:13.266636    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.266721    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:13.270023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:13.297369    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.297434    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:13.300782    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:13.336039    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.336039    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:13.341919    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:13.370523    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.370523    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:13.374455    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:13.404606    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.404606    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:13.408542    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:13.437373    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.437431    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:13.441106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:13.470738    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.470738    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:13.474495    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:13.502203    8452 logs.go:282] 0 containers: []
	W1216 06:23:13.502262    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:13.502262    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:13.502293    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:13.552578    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:13.552578    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:13.617499    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:13.617499    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:13.660047    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:13.660047    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:13.747316    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:13.738079   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.738758   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.742443   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.743550   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:13.745058   12131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
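	Every describe-nodes attempt in this run fails the same way: kubectl cannot reach the apiserver on localhost:8443 because no kube-apiserver container exists, matching the empty probe results above. A quick way to reproduce the same check by hand, assuming SSH access to the node and the binary path shown in the log (<profile> is a placeholder, not a value from this report):
	
	    # Hypothetical manual probe from inside the minikube node. /readyz
	    # returns "ok" once the apiserver is serving; here it would fail with
	    # the same connection-refused error seen in the log.
	    minikube ssh -p <profile> -- \
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz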
	I1216 06:23:13.747316    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:13.747316    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.284216    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:16.307907    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:16.344535    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.344535    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:16.347847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:16.379001    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.379021    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:16.382292    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:16.413093    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.413116    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:16.418012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:16.456763    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.456826    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:16.460621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:16.491671    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.491693    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:16.495352    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:16.527862    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.527862    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:16.534704    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:16.564194    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.564243    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:16.570369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:16.601444    8452 logs.go:282] 0 containers: []
	W1216 06:23:16.601444    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:16.601444    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:16.601444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:16.631785    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:16.631785    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:16.675190    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:16.675190    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:16.737700    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:16.737700    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:16.775092    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:16.775092    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:16.865026    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:16.854947   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.856518   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.857755   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.858611   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:16.862077   12297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:19.370669    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:19.393524    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:19.423405    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.423513    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:19.427307    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:19.459137    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.459238    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:19.462635    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:19.493542    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.493542    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:19.497334    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:19.526496    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.526496    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:19.529949    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:19.559120    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.559120    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:19.562460    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:19.591305    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.591305    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:19.595794    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:19.625200    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.626193    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:19.629187    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:19.657201    8452 logs.go:282] 0 containers: []
	W1216 06:23:19.657201    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:19.657270    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:19.657270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:19.722496    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:19.722496    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:19.761161    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:19.761161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:19.852755    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:19.842311   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.843616   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.845108   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.846511   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:19.848033   12441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:19.853756    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:19.853756    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:19.880330    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:19.881280    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
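	The timestamps show the outer wait loop re-running the whole sweep roughly every three seconds: a pgrep for a kube-apiserver process, the eight container probes, then the log gathers. A hedged sketch of that retry shape (the ~3s interval is read off the timestamps above, not taken from minikube's source):
	
	    # Sketch of the visible retry cadence; not minikube's implementation.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3   # interval inferred from the log timestamps
	      # ... re-run the container probes and log gathering here ...
	    done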
	I1216 06:23:22.458668    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:22.483505    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:22.514647    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.514647    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:22.518193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:22.551494    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.551494    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:22.555268    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:22.586119    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.586119    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:22.590107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:22.621733    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.621733    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:22.624739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:22.651728    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.651728    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:22.655725    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:22.687826    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.687826    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:22.692217    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:22.727413    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.727413    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:22.731318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:22.769477    8452 logs.go:282] 0 containers: []
	W1216 06:23:22.769477    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:22.770462    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:22.770462    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:22.795455    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:22.795455    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:22.851473    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:22.851473    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:22.911454    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:22.912459    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:22.948112    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:22.948112    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:23.039238    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:23.027399   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.029988   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.032319   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.033197   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:23.034754   12638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:25.544174    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:25.571784    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:25.610368    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.610422    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:25.615377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:25.651080    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.651129    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:25.655234    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:25.695942    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.695942    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:25.700548    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:25.727743    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.727743    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:25.730739    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:25.765620    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.765650    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:25.769261    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:25.805072    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.805127    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:25.810318    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:25.840307    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.840307    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:25.844490    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:25.888279    8452 logs.go:282] 0 containers: []
	W1216 06:23:25.888279    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:25.888279    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:25.888279    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:25.964206    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:25.964206    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:26.003275    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:26.003275    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:26.111485    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:26.100110   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.101410   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.102485   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.103943   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:26.106459   12790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:26.111485    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:26.111485    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:26.146819    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:26.146819    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:28.694382    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:28.716947    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:28.753062    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.753062    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:28.756810    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:28.789692    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.789692    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:28.794681    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:28.823690    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.823690    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:28.827683    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:28.858686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.858686    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:28.861688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:28.891686    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.891686    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:28.894684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:28.923683    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.923683    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:28.926684    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:28.958314    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.958314    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:28.962325    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:28.991317    8452 logs.go:282] 0 containers: []
	W1216 06:23:28.991317    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:28.991317    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:28.991317    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:29.039348    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:29.039348    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:29.103117    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:29.103117    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:29.148003    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:29.148003    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:29.240448    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:29.231637   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.232903   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.233781   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.235305   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:29.236187   12968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:29.240448    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:29.240448    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:31.772923    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:31.796203    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:31.827485    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.827485    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:31.830572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:31.873718    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.873718    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:31.877445    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:31.926391    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.926391    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:31.929391    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:31.964572    8452 logs.go:282] 0 containers: []
	W1216 06:23:31.964572    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:31.968096    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:32.003776    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.003776    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:32.007175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:32.046322    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.046322    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:32.049283    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:32.077299    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.077299    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:32.080289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:32.114717    8452 logs.go:282] 0 containers: []
	W1216 06:23:32.114793    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:32.114793    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:32.114843    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:32.191987    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:32.191987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:32.237143    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:32.237143    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:32.331899    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:32.320682   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.321669   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.322765   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.323937   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:32.325077   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:32.331899    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:32.331899    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:32.362021    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:32.362021    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:34.918825    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:34.945647    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:34.976745    8452 logs.go:282] 0 containers: []
	W1216 06:23:34.976745    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:34.980636    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:35.012295    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.012295    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:35.015295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:35.047289    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.047289    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:35.050289    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:35.081492    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.081492    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:35.085580    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:35.121645    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.121645    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:35.126840    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:35.167976    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.167976    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:35.170966    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:35.201969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.201969    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:35.204969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:35.232969    8452 logs.go:282] 0 containers: []
	W1216 06:23:35.233980    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:35.233980    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:35.233980    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:35.292973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:35.292973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:35.327973    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:35.327973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:35.420114    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:35.408265   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.409288   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.411387   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.412305   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:35.414964   13289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:35.420114    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:35.420114    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:35.451148    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:35.451148    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:38.010056    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:38.035506    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:38.071853    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.071853    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:38.075564    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:38.106543    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.106543    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:38.109547    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:38.143669    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.143669    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:38.152737    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:38.191923    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.191923    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:38.195575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:38.225935    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.225935    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:38.228939    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:38.268550    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.268550    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:38.271759    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:38.304387    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.304421    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:38.307849    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:38.341968    8452 logs.go:282] 0 containers: []
	W1216 06:23:38.341968    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:38.341968    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:38.341968    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:38.404267    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:38.404267    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:38.443104    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:38.443104    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:38.551474    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:38.541253   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.542348   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.543312   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.544172   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:38.547260   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:38.551474    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:38.551474    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:38.582843    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:38.582869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.141896    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:41.185331    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:41.218961    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.219548    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:41.223789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:41.252376    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.252376    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:41.255368    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:41.285378    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.285378    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:41.288369    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:41.318383    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.318383    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:41.321372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:41.349373    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.349373    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:41.353377    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:41.390105    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.390105    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:41.393103    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:41.425109    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.425109    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:41.428107    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:41.462594    8452 logs.go:282] 0 containers: []
	W1216 06:23:41.462594    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:41.462594    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:41.462594    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:41.492096    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:41.492156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:41.553755    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:41.553806    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:41.622329    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:41.622329    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:41.664016    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:41.664016    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:41.759009    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:41.747610   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.750631   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.751980   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.753708   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:41.754667   13659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:44.265223    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:44.286309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:44.319583    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.319583    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:44.324575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:44.358046    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.358114    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:44.361895    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:44.390541    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.390541    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:44.395354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:44.433163    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.433163    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:44.436754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:44.470605    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.470605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:44.475856    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:44.504412    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.504484    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:44.508013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:44.540170    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.540170    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:44.545802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:44.574593    8452 logs.go:282] 0 containers: []
	W1216 06:23:44.575118    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:44.575181    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:44.575181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:44.609181    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:44.609231    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:44.663988    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:44.663988    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:44.737678    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:44.737678    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:44.777530    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:44.777530    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:44.868751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:44.859104   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.860679   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.862144   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864136   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:44.864851   13826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:47.373432    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:47.674375    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:47.705067    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.705067    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:47.709193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:47.739921    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.739921    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:47.743656    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:47.771970    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.771970    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:47.776451    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:47.808633    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.808633    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:47.813124    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:47.856079    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.856079    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:47.859452    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:47.891897    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.891897    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:47.895769    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:47.926050    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.926050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:47.929679    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:47.962571    8452 logs.go:282] 0 containers: []
	W1216 06:23:47.962571    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:47.962571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:47.962571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:48.026367    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:48.026367    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:48.063580    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:48.063580    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:48.173751    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:48.158431   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.159479   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.165158   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.166391   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:48.167320   13971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:48.173792    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:48.173792    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:48.199403    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:48.199403    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:50.750699    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:50.774573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:50.804983    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.804983    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:50.808894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:50.838533    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.838533    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:50.842242    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:50.873377    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.873377    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:50.877508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:50.907646    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.907646    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:50.912410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:50.943853    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.943853    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:50.950275    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:50.977570    8452 logs.go:282] 0 containers: []
	W1216 06:23:50.977570    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:50.982575    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:51.010211    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.010211    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:51.014545    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:51.048584    8452 logs.go:282] 0 containers: []
	W1216 06:23:51.048584    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:51.048584    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:51.048584    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:51.112725    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:51.112725    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:51.150854    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:51.150854    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:51.246494    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:51.234086   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.234782   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.237119   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.238483   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:51.239545   14131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:51.246535    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:51.246535    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:51.274873    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:51.274873    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:53.832981    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:53.857995    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:53.892159    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.892159    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:53.895775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:53.926160    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.926160    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:53.929408    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:53.956482    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.956552    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:53.959711    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:53.989788    8452 logs.go:282] 0 containers: []
	W1216 06:23:53.989788    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:53.993230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:54.022506    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.022506    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:54.025409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:54.054974    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.054974    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:54.059372    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:54.088015    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.088015    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:54.092123    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:54.121961    8452 logs.go:282] 0 containers: []
	W1216 06:23:54.121961    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:54.121961    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:54.121961    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:54.169232    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:54.169295    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:54.230158    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:54.231156    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:54.267713    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:54.267713    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:54.368006    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:54.354773   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.355541   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.357931   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.358690   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:54.363501   14305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:54.368006    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:54.368006    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:56.899723    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:23:56.923149    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:23:56.957635    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.957635    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:23:56.961499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:23:56.988363    8452 logs.go:282] 0 containers: []
	W1216 06:23:56.988363    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:23:56.992371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:23:57.021993    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.021993    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:23:57.025544    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:23:57.055718    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.055718    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:23:57.060969    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:23:57.092456    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.092523    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:23:57.096418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:23:57.125588    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.125588    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:23:57.129665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:23:57.160663    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.160663    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:23:57.164518    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:23:57.196231    8452 logs.go:282] 0 containers: []
	W1216 06:23:57.196281    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:23:57.196281    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:23:57.196281    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:23:57.258973    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:23:57.258973    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:23:57.302939    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:23:57.302939    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:23:57.397577    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:23:57.385877   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387022   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.387942   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.390178   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:23:57.391208   14460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:23:57.397577    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:23:57.397577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:23:57.434801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:23:57.434801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:23:59.991022    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:00.014170    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:00.046529    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.046529    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:00.050903    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:00.080796    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.080796    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:00.084418    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:00.114858    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.114858    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:00.121404    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:00.152596    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.152596    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:00.156447    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:00.183532    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.183648    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:00.187074    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:00.218971    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.218971    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:00.222929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:00.252086    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.252086    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:00.256309    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:00.285884    8452 logs.go:282] 0 containers: []
	W1216 06:24:00.285884    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:00.285884    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:00.285884    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:00.364208    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:00.364208    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:00.403464    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:00.403464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:00.495864    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:00.486283   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.488489   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.489710   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.490954   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:00.491919   14630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:00.495864    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:00.495864    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:00.521592    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:00.521592    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:03.070724    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:03.093858    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:03.127112    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.127112    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:03.131265    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:03.161262    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.161262    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:03.165073    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:03.195882    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.195933    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:03.200488    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:03.230205    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.230205    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:03.234193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:03.263580    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.263629    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:03.267410    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:03.297599    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.297652    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:03.300957    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:03.329666    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.329720    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:03.333378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:03.365184    8452 logs.go:282] 0 containers: []
	W1216 06:24:03.365236    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:03.365282    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:03.365282    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:03.428385    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:03.428385    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:03.465984    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:03.465984    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:03.557873    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:03.548835   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.549579   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.552309   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.553609   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:03.554083   14789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:03.559101    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:03.559101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:03.586791    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:03.586791    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:06.142562    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:06.170227    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:06.202672    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.202672    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:06.206691    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:06.237624    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.237624    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:06.241559    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:06.267616    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.267616    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:06.271709    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:06.304567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.304567    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:06.308556    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:06.337567    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.337567    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:06.344744    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:06.373520    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.373520    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:06.377184    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:06.411936    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.411936    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:06.415789    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:06.447263    8452 logs.go:282] 0 containers: []
	W1216 06:24:06.447263    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:06.447263    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:06.447263    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:06.509097    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:06.509097    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:06.546188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:06.546188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:06.639923    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:06.628839   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.630167   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.633634   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.635056   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:06.636158   14952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:06.639923    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:06.639923    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:06.666485    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:06.666519    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.221249    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:09.244788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:09.276490    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.276490    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:09.280706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:09.309520    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.309520    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:09.313105    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:09.339092    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.339092    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:09.343484    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:09.369046    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.369046    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:09.373188    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:09.403810    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.403810    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:09.407108    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:09.437156    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.437156    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:09.441754    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:09.469752    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.469810    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:09.473378    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:09.503754    8452 logs.go:282] 0 containers: []
	W1216 06:24:09.503754    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:09.503754    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:09.503754    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:09.533645    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:09.533718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:09.587529    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:09.587529    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:09.647801    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:09.647801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:09.686577    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:09.686577    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:09.782674    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:09.774773   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.775807   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.776786   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.777919   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:09.779100   15125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:12.288199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:12.313967    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:12.344043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.344043    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:12.348347    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:12.378683    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.378683    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:12.382106    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:12.411599    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.411599    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:12.415131    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:12.445826    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.445873    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:12.450940    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:12.481043    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.481078    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:12.484800    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:12.512969    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.512990    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:12.515915    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:12.548151    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.548228    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:12.551706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:12.584039    8452 logs.go:282] 0 containers: []
	W1216 06:24:12.584039    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:12.584039    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:12.584039    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:12.646680    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:12.646680    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:12.686545    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:12.686545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:12.804767    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:12.797121   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.798130   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.799302   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.800614   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:12.801994   15270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:12.804767    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:12.804767    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:12.831866    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:12.831866    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
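	The container-status command just above is a double fallback worth unpacking. A commented restatement with the same behavior (annotation added for this report, not present in minikube's output):

	    # `which crictl || echo crictl` expands to crictl's full path when it is
	    # installed, and to the bare word "crictl" otherwise (which then fails to run).
	    # If that invocation fails for any reason, `|| sudo docker ps -a` falls back
	    # to listing all containers through the docker CLI directly.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a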
	I1216 06:24:15.392415    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:15.416435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:15.445044    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.445044    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:15.449260    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:15.476688    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.476688    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:15.481012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:15.508866    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.508928    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:15.512662    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:15.541002    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.541002    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:15.545363    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:15.574947    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.574991    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:15.578407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:15.604751    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.604751    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:15.608699    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:15.639261    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.639338    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:15.642317    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:15.674404    8452 logs.go:282] 0 containers: []
	W1216 06:24:15.674404    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:15.674404    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:15.674404    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:15.736218    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:15.736218    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:15.774188    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:15.774188    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:15.862546    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:15.855457   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.856885   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.857990   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.858999   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:15.860222   15431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:15.862546    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:15.862546    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:15.888115    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:15.888115    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.441031    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:18.465207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:18.495447    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.495481    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:18.498929    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:18.528412    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.528476    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:18.531543    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:18.560175    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.560175    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:18.563996    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:18.592824    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.592894    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:18.596175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:18.623746    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.623746    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:18.627099    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:18.652978    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.653013    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:18.656407    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:18.683637    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.683686    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:18.687125    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:18.716903    8452 logs.go:282] 0 containers: []
	W1216 06:24:18.716942    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:18.716964    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:18.716981    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:18.743123    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:18.743675    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:18.794891    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:18.794891    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:18.858345    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:18.858345    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:18.894242    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:18.894242    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:18.979844    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:18.967590   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.968483   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.972121   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.973316   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:18.974595   15609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
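	Every kubectl attempt in these blocks fails at the TCP dial itself ("dial tcp [::1]:8443: connect: connection refused"), i.e. before TLS or authentication, so nothing is listening on the apiserver port at all. A quick manual reachability check from inside the node (a hypothetical diagnostic, not something the test runs):

	    # Connection refused at dial time means no listener on 8443;
	    # -k skips certificate verification since only reachability matters.
	    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver not listening"

	The identical connection-refused pattern repeats in every retry cycle that follows.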
	I1216 06:24:21.485585    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:21.510290    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:21.539823    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.539823    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:21.543159    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:21.575241    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.575241    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:21.579330    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:21.607389    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.607490    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:21.611023    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:21.642332    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.642332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:21.645973    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:21.671339    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.671390    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:21.675048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:21.704483    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.704483    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:21.708499    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:21.734944    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.735027    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:21.738688    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:21.768890    8452 logs.go:282] 0 containers: []
	W1216 06:24:21.768890    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:21.768987    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:21.768987    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:21.800297    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:21.800344    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:21.854571    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:21.854571    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:21.921230    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:21.921230    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:21.961787    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:21.961787    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:22.060842    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:22.049706   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.051052   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.052217   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.053565   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:22.054683   15774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:24.566957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:24.591909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:24.624010    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.624010    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:24.627550    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:24.657938    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.657938    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:24.661917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:24.688848    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.688848    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:24.692388    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:24.722130    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.722165    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:24.725802    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:24.754067    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.754134    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:24.757294    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:24.783522    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.783595    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:24.787022    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:24.818531    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.818531    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:24.822200    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:24.851316    8452 logs.go:282] 0 containers: []
	W1216 06:24:24.851371    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:24.851391    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:24.851391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:24.940030    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:24.930688   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932210   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.932749   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.935259   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:24.936481   15915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:24.941511    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:24.941511    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:24.967127    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:24.967127    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:25.018271    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:25.018358    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:25.077769    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:25.077769    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:27.621222    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:27.644179    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:27.675033    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.675033    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:27.678724    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:27.707945    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.707945    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:27.712443    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:27.740635    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.740635    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:27.744539    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:27.775332    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.775332    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:27.779621    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:27.807884    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.807884    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:27.812207    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:27.843877    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.843877    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:27.850126    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:27.878365    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.878365    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:27.883323    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:27.911733    8452 logs.go:282] 0 containers: []
	W1216 06:24:27.911733    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:27.911733    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:27.911733    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:27.975085    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:27.975085    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:28.011926    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:28.011926    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:28.117959    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:28.107836   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.108827   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.109966   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111036   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:28.111973   16084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:28.117959    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:28.117959    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:28.146135    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:28.146135    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:30.702904    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:30.732783    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:30.768726    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.768726    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:30.772432    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:30.804888    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.804888    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:30.809005    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:30.839403    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.839403    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:30.843668    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:30.874013    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.874013    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:30.878013    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:30.906934    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.906934    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:30.911178    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:30.936942    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.936942    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:30.940954    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:30.967843    8452 logs.go:282] 0 containers: []
	W1216 06:24:30.967843    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:30.973798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:31.000614    8452 logs.go:282] 0 containers: []
	W1216 06:24:31.000614    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:31.000614    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:31.000614    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:31.063545    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:31.063545    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:31.101704    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:31.101704    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:31.201356    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:31.191294   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.192220   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.193651   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.194994   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:31.196236   16259 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:31.201356    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:31.201356    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:31.229634    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:31.229634    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:33.780745    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:33.805148    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:33.836319    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.836319    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:33.840094    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:33.872138    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.872167    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:33.875487    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:33.908318    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.908318    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:33.912197    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:33.940179    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.940223    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:33.944152    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:33.974912    8452 logs.go:282] 0 containers: []
	W1216 06:24:33.974912    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:33.978728    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:34.004557    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.004557    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:34.008971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:34.037591    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.037591    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:34.041537    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:34.073153    8452 logs.go:282] 0 containers: []
	W1216 06:24:34.073153    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:34.073153    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:34.073153    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:34.139585    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:34.139585    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:34.177888    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:34.177888    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:34.273589    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:34.261720   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.263807   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265121   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.265849   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:34.269697   16432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:34.273589    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:34.273589    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:34.298805    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:34.298805    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:36.851957    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:36.889887    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:36.919682    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.919682    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:36.923468    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:36.953008    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.953073    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:36.957253    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:36.985770    8452 logs.go:282] 0 containers: []
	W1216 06:24:36.985770    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:36.989059    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:37.015702    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.015702    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:37.019508    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:37.046311    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.046351    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:37.050327    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:37.087936    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.087936    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:37.092175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:37.121271    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.121271    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:37.125767    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:37.153753    8452 logs.go:282] 0 containers: []
	W1216 06:24:37.153814    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:37.153814    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:37.153869    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:37.218058    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:37.218058    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:37.256162    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:37.257161    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:37.349292    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:37.338679   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.339807   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.342488   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344008   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:37.344707   16597 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:37.349292    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:37.349292    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:37.378861    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:37.379384    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:39.931797    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:39.956069    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:39.991154    8452 logs.go:282] 0 containers: []
	W1216 06:24:39.991154    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:39.994809    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:40.021488    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.021488    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:40.025604    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:40.055464    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.055464    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:40.059576    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:40.085410    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.086402    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:40.090048    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:40.120389    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.120389    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:40.125766    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:40.159925    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.159962    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:40.163912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:40.190820    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.190820    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:40.194350    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:40.223821    8452 logs.go:282] 0 containers: []
	W1216 06:24:40.223886    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:40.223886    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:40.223886    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:40.292033    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:40.292033    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:40.331274    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:40.331274    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:40.423708    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:40.414779   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.415670   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.417383   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.418406   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:40.419821   16761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:40.423708    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:40.423708    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:40.452101    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:40.452101    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
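Each retry cycle above opens with the same eight probes, one docker ps query per expected control-plane container name (k8s_kube-apiserver, k8s_etcd, and so on), and every probe returns zero containers. A minimal Go sketch, not minikube's actual source, that reproduces those probe commands:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            // Mirrors the log: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }

All eight probes coming back empty is what sends each cycle into the "Gathering logs for ..." fallbacks that follow.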
	I1216 06:24:43.005925    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:43.029165    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:43.060601    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.060601    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:43.064304    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:43.092446    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.092446    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:43.096552    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:43.127295    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.127347    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:43.130913    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:43.159919    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.159986    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:43.163049    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:43.190310    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.190384    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:43.194093    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:43.223641    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.223641    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:43.227270    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:43.254592    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.254592    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:43.259912    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:43.293166    8452 logs.go:282] 0 containers: []
	W1216 06:24:43.293166    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:43.293166    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:43.293166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:43.328685    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:43.328685    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:43.412970    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:43.403637   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.404840   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.406300   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.407723   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:43.408970   16919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:43.413012    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:43.413042    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:43.444573    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:43.444573    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:43.501857    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:43.501857    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
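The timestamps show the whole probe-and-gather cycle repeating on a roughly three-second cadence, gated on whether a kube-apiserver process exists yet. A hedged Go sketch of such a poll loop; the three-second interval matches the log above, while the two-minute deadline is an assumption for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the log's probe:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    // pgrep exits 0 only when some process matches the full command line.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed deadline, for illustration only
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            fmt.Println("kube-apiserver not found; gathering logs and retrying")
            time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }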
	I1216 06:24:46.068154    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:46.095291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:46.125740    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.125740    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:46.131016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:46.160926    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.160926    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:46.164909    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:46.192634    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.192634    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:46.196346    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:46.224203    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.224203    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:46.228650    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:46.255541    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.255541    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:46.259732    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:46.289377    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.289377    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:46.293566    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:46.321342    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.321342    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:46.325492    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:46.352311    8452 logs.go:282] 0 containers: []
	W1216 06:24:46.352342    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:46.352342    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:46.352382    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:46.416761    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:46.416761    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:46.469641    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:46.469641    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:46.580672    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:46.571720   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.572978   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.574336   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.575497   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:46.576934   17090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:46.581191    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:46.581229    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:46.608166    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:46.608166    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
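The "container status" step uses a shell fallback chain: resolve crictl on PATH and run it, and if that command fails for any reason, fall back to plain docker ps -a. A small Go sketch that runs the same bash command (bash, sudo, and the container tools themselves are assumed to be available):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Backticks in the command substitute crictl's resolved path; the
        // `|| echo crictl` keeps the command non-empty even when crictl is
        // missing, so the `|| sudo docker ps -a` fallback can still trigger.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("container status failed: %v\n", err)
        }
        fmt.Print(string(out))
    }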
	I1216 06:24:49.162834    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:49.187402    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:49.219893    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.219893    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:49.223424    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:49.252338    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.252338    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:49.255900    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:49.286106    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.286131    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:49.289776    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:49.317141    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.317141    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:49.322761    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:49.353605    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.353605    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:49.357674    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:49.385747    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.385793    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:49.388757    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:49.417812    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.417812    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:49.421500    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:49.452746    8452 logs.go:282] 0 containers: []
	W1216 06:24:49.452797    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:49.452797    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:49.452797    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:49.516432    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:49.516432    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:49.553647    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:49.553647    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:49.647049    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:49.634019   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.635733   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.637439   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.639063   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:49.641787   17254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:49.647087    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:49.647087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:49.671889    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:49.671889    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
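Every failed kubectl call above reports the same symptom: "dial tcp [::1]:8443: connect: connection refused", meaning nothing is listening on the apiserver's default secure port while the control plane is down. A minimal Go check that reproduces the dial kubectl is attempting; the two-second timeout is an illustrative assumption:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl's API group discovery is failing at exactly this step:
        // opening a TCP connection to localhost:8443.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("connection refused or timed out:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }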
	I1216 06:24:52.224199    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:52.248067    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:52.282412    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.282412    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:52.286308    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:52.315955    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.315955    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:52.319894    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:52.353188    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.353188    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:52.356528    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:52.387579    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.387579    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:52.392336    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:52.421909    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.421909    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:52.425890    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:52.458902    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.458902    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:52.462430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:52.498067    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.498140    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:52.501354    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:52.528125    8452 logs.go:282] 0 containers: []
	W1216 06:24:52.528125    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:52.528125    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:52.528125    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:52.593845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:52.593845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:52.632779    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:52.632779    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:52.732902    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:52.721650   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.722944   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.723751   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.725908   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:52.727170   17415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:52.732902    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:52.732902    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:52.762437    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:52.762437    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.328400    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:55.355014    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:55.387364    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.387364    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:55.391085    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:55.417341    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.417341    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:55.421141    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:55.450785    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.450785    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:55.454454    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:55.482484    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.482484    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:55.486100    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:55.513682    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.513682    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:55.517291    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:55.548548    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.548548    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:55.552971    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:55.583380    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.583380    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:55.587471    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:55.618619    8452 logs.go:282] 0 containers: []
	W1216 06:24:55.618619    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:55.618619    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:55.618686    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:55.646962    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:55.646962    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:55.695480    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:55.695480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:24:55.757470    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:55.757470    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:55.796071    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:55.796071    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:55.889833    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:55.877628   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.878745   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.880269   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.881391   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:55.882702   17602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:58.396122    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:24:58.423573    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:24:58.454757    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.454757    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:24:58.460430    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:24:58.490597    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.490597    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:24:58.493832    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:24:58.523149    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.523149    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:24:58.526960    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:24:58.558649    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.558649    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:24:58.562228    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:24:58.591400    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.591400    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:24:58.595569    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:24:58.624162    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.624162    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:24:58.628070    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:24:58.660578    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.660578    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:24:58.664236    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:24:58.693155    8452 logs.go:282] 0 containers: []
	W1216 06:24:58.693155    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:24:58.693155    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:24:58.693155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:24:58.732408    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:24:58.733409    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:24:58.823465    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:24:58.812767   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.814019   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.815130   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.816828   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:24:58.818278   17737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:24:58.823465    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:24:58.823465    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:24:58.848772    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:24:58.848772    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:24:58.900567    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:24:58.900567    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:01.465828    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:01.490385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:01.520316    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.520316    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:01.524299    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:01.555350    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.555350    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:01.559239    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:01.587077    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.587077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:01.591421    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:01.623853    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.623853    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:01.627746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:01.658165    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.658165    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:01.661588    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:01.703310    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.703310    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:01.709361    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:01.740903    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.740903    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:01.744287    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:01.773431    8452 logs.go:282] 0 containers: []
	W1216 06:25:01.773431    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:01.773431    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:01.773431    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:01.863541    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:01.853956   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.855113   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.856000   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.858627   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:01.859841   17900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:01.863541    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:01.863541    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:01.891816    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:01.891816    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:01.936351    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:01.936351    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:01.997563    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:01.997563    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.541470    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:04.565886    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:04.595881    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.595881    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:04.599716    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:04.629724    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.629749    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:04.633814    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:04.666020    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.666047    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:04.669510    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:04.699730    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.699730    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:04.704016    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:04.734540    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.734540    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:04.738414    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:04.765651    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.765651    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:04.769397    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:04.797315    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.797315    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:04.801409    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:04.832845    8452 logs.go:282] 0 containers: []
	W1216 06:25:04.832845    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:04.832845    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:04.832845    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:04.869617    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:04.869617    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:04.958334    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:04.947769   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.948641   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.950127   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.953617   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:04.954566   18078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:04.958334    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:04.958334    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:04.983497    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:04.983497    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:05.037861    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:05.037887    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.603239    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:07.626775    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:07.655146    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.655146    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:07.658648    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:07.688192    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.688227    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:07.691749    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:07.723836    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.723836    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:07.727536    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:07.761238    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.761238    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:07.764987    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:07.792890    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.792890    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:07.796847    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:07.824734    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.824734    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:07.828821    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:07.859399    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.859399    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:07.862780    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:07.893406    8452 logs.go:282] 0 containers: []
	W1216 06:25:07.893406    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:07.893457    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:07.893480    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:07.954656    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:07.954656    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:07.992200    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:07.993203    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:08.077979    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:08.068614   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.069601   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.072821   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.074198   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:08.075251   18251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:08.077979    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:08.077979    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:08.102718    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:08.102718    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:10.662101    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:10.688889    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:10.721934    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.721996    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:10.727012    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:10.760697    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.760746    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:10.763961    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:10.791222    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.791293    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:10.795121    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:10.826239    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.826317    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:10.829753    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:10.857355    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.857355    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:10.861145    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:10.903922    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.903922    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:10.907990    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:10.937216    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.937281    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:10.940707    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:10.969086    8452 logs.go:282] 0 containers: []
	W1216 06:25:10.969086    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:10.969086    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:10.969238    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:11.062109    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:11.051521   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.052462   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.056878   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.058033   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:11.059089   18403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:11.062109    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:11.062109    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:11.090185    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:11.090185    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:11.141444    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:11.141444    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:11.199181    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:11.199181    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:13.741347    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:13.766441    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:13.800424    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.800424    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:13.805169    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:13.835040    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.835040    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:13.839295    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:13.864861    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.866077    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:13.869598    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:13.898887    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.898887    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:13.903167    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:13.931208    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.931208    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:13.936649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:13.963722    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.963722    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:13.967474    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:13.998640    8452 logs.go:282] 0 containers: []
	W1216 06:25:13.998640    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:14.002572    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:14.031349    8452 logs.go:282] 0 containers: []
	W1216 06:25:14.031401    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:14.031401    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:14.031401    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:14.124587    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:14.114187   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.115232   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.117492   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.120421   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:14.121924   18560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:14.124587    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:14.124714    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:14.153583    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:14.153583    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:14.202636    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:14.202636    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:14.260591    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:14.260591    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:16.808603    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:16.833787    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:16.864300    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.864300    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:16.868592    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:16.897549    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.897549    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:16.900917    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:16.931516    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.931557    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:16.936698    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:16.965053    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.965053    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:16.969015    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:16.997017    8452 logs.go:282] 0 containers: []
	W1216 06:25:16.997017    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:17.000551    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:17.028733    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.028733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:17.032830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:17.062242    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.062242    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:17.066193    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:17.096111    8452 logs.go:282] 0 containers: []
	W1216 06:25:17.096186    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:17.096186    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:17.096243    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:17.126801    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:17.126801    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:17.178392    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:17.178392    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:17.239223    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:17.239223    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:17.276363    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:17.277364    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:17.362910    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:17.350082   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.351537   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.353217   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356242   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:17.356652   18746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:19.869062    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:19.894371    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:19.924915    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.924915    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:19.929351    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:19.956535    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.956535    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:19.960534    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:19.989334    8452 logs.go:282] 0 containers: []
	W1216 06:25:19.989334    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:19.993202    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:20.021108    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.021108    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:20.025230    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:20.054251    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.054251    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:20.057788    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:20.088787    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.088860    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:20.092250    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:20.120577    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.120577    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:20.123857    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:20.153015    8452 logs.go:282] 0 containers: []
	W1216 06:25:20.153015    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:20.153015    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:20.153015    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:20.241391    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:20.228683   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230149   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.230993   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.233437   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:20.234532   18887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:20.241391    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:20.241391    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:20.267492    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:20.267554    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:20.321240    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:20.321880    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:20.384978    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:20.384978    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:22.926087    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:22.949774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:22.982854    8452 logs.go:282] 0 containers: []
	W1216 06:25:22.982854    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:22.986923    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:23.017638    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.017638    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:23.021130    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:23.052442    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.052667    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:23.058175    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:23.085210    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.085210    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:23.089664    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:23.120747    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.120795    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:23.124581    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:23.150600    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.150600    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:23.154602    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:23.182147    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.182147    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:23.185649    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:23.217087    8452 logs.go:282] 0 containers: []
	W1216 06:25:23.217087    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:23.217087    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:23.217087    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:23.280619    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:23.280619    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:23.318090    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:23.318090    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:23.406270    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:23.394203   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.395270   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.396259   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.397372   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:23.399435   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:23.406270    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:23.406270    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:23.435128    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:23.435128    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:25.989934    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:26.012706    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:26.043141    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.043141    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:26.047435    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:26.075985    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.075985    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:26.079830    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:26.110575    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.110575    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:26.113774    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:26.144668    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.144668    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:26.148428    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:26.175392    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.175392    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:26.179120    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:26.211067    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.211067    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:26.215072    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:26.243555    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.243586    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:26.246934    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:26.279876    8452 logs.go:282] 0 containers: []
	W1216 06:25:26.279876    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:26.279876    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:26.279876    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:26.387447    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:26.373284   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.375793   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.378641   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.380138   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:26.381292   19213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:26.387488    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:26.387537    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:26.413896    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:26.413896    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:26.462318    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:26.462318    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:26.527832    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:26.527832    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.072565    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:29.096390    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:29.127989    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.127989    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:29.131385    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:29.158741    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.158741    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:29.162538    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:29.190346    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.190346    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:29.193798    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:29.222234    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.222234    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:29.225740    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:29.252553    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.252553    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:29.256489    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:29.285679    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.285733    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:29.289742    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:29.320841    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.321050    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:29.324841    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:29.352461    8452 logs.go:282] 0 containers: []
	W1216 06:25:29.352587    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:29.352615    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:29.352615    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:29.419045    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:29.419045    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:29.457659    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:29.457659    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:29.544155    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:29.532538   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.533394   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.535272   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.536253   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:29.537671   19378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:29.544155    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:29.544155    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:29.571612    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:29.571646    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:32.139910    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:32.164438    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1216 06:25:32.196526    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.196526    8452 logs.go:284] No container was found matching "kube-apiserver"
	I1216 06:25:32.200231    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1216 06:25:32.226279    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.226279    8452 logs.go:284] No container was found matching "etcd"
	I1216 06:25:32.230146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1216 06:25:32.257831    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.257831    8452 logs.go:284] No container was found matching "coredns"
	I1216 06:25:32.262665    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1216 06:25:32.293641    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.293641    8452 logs.go:284] No container was found matching "kube-scheduler"
	I1216 06:25:32.297746    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1216 06:25:32.327055    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.327055    8452 logs.go:284] No container was found matching "kube-proxy"
	I1216 06:25:32.331274    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1216 06:25:32.362206    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.362206    8452 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 06:25:32.365146    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1216 06:25:32.394600    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.394600    8452 logs.go:284] No container was found matching "kindnet"
	I1216 06:25:32.400058    8452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1216 06:25:32.428075    8452 logs.go:282] 0 containers: []
	W1216 06:25:32.428075    8452 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 06:25:32.428075    8452 logs.go:123] Gathering logs for kubelet ...
	I1216 06:25:32.428075    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 06:25:32.491661    8452 logs.go:123] Gathering logs for dmesg ...
	I1216 06:25:32.491661    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 06:25:32.528847    8452 logs.go:123] Gathering logs for describe nodes ...
	I1216 06:25:32.528847    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 06:25:32.616464    8452 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:25:32.604734   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.606447   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.608434   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.609518   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:25:32.610901   19547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1216 06:25:32.616464    8452 logs.go:123] Gathering logs for Docker ...
	I1216 06:25:32.616464    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1216 06:25:32.642397    8452 logs.go:123] Gathering logs for container status ...
	I1216 06:25:32.642397    8452 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 06:25:35.191852    8452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 06:25:35.225285    8452 out.go:203] 
	W1216 06:25:35.227244    8452 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1216 06:25:35.227244    8452 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1216 06:25:35.227244    8452 out.go:285] * Related issues:
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1216 06:25:35.227244    8452 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1216 06:25:35.230096    8452 out.go:203] 
	
	
	==> Docker <==
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570336952Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570433565Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570447467Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570465470Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570473171Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570498774Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.570539380Z" level=info msg="Initializing buildkit"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.671982027Z" level=info msg="Completed buildkit initialization"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680146533Z" level=info msg="Daemon has completed initialization"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680337859Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680374664Z" level=info msg="API listen on /run/docker.sock"
	Dec 16 06:16:00 no-preload-686300 dockerd[929]: time="2025-12-16T06:16:00.680404268Z" level=info msg="API listen on [::]:2376"
	Dec 16 06:16:00 no-preload-686300 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 16 06:16:01 no-preload-686300 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Start docker client with request timeout 0s"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Loaded network plugin cni"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 16 06:16:01 no-preload-686300 cri-dockerd[1225]: time="2025-12-16T06:16:01Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 16 06:16:01 no-preload-686300 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1216 06:34:54.972712   21326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:34:54.974268   21326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:34:54.975225   21326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:34:54.976409   21326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1216 06:34:54.977566   21326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.633501] CPU: 10 PID: 466820 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f865800db20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f865800daf6.
	[  +0.000001] RSP: 002b:00007ffc8c624780 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000033] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.839091] CPU: 12 PID: 466960 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fa6af131b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fa6af131af6.
	[  +0.000001] RSP: 002b:00007ffe97387e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec16 06:22] tmpfs: Unknown parameter 'noswap'
	[  +9.428310] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 06:34:55 up  2:11,  0 user,  load average: 0.22, 0.79, 2.28
	Linux no-preload-686300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 16 06:34:51 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:34:52 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1505.
	Dec 16 06:34:52 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:52 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:52 no-preload-686300 kubelet[21135]: E1216 06:34:52.547525   21135 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:34:52 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:34:52 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:34:53 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1506.
	Dec 16 06:34:53 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:53 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:53 no-preload-686300 kubelet[21162]: E1216 06:34:53.321278   21162 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:34:53 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:34:53 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:34:53 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1507.
	Dec 16 06:34:53 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:53 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:54 no-preload-686300 kubelet[21192]: E1216 06:34:54.058857   21192 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:34:54 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:34:54 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 06:34:54 no-preload-686300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1508.
	Dec 16 06:34:54 no-preload-686300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:54 no-preload-686300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 16 06:34:54 no-preload-686300 kubelet[21271]: E1216 06:34:54.797031   21271 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 16 06:34:54 no-preload-686300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 16 06:34:54 no-preload-686300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-686300 -n no-preload-686300: exit status 2 (601.4266ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-686300" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (225.28s)
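
Every kubelet restart captured above dies on the same validation: this kubelet is configured not to run on a cgroup v1 host, and the WSL2 kernel backing this Docker Desktop node still exposes the legacy hierarchy. A minimal Go sketch of the usual detection, probing for the unified hierarchy's cgroup.controllers file (present only under cgroup v2); this is an illustration, not minikube's own check:

    package main

    // Sketch (not minikube's code): detect whether the host exposes cgroup v2
    // by probing for the unified hierarchy's cgroup.controllers file, which
    // exists only when /sys/fs/cgroup is mounted as cgroup2.
    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            // Matches the failure mode in the kubelet log above.
            fmt.Println("cgroup v1 (legacy hierarchy); this kubelet refuses to start here")
        }
    }

On WSL2 hosts, the commonly documented way to switch to cgroup v2 is setting kernelCommandLine = cgroup_no_v1=all under [wsl2] in .wslconfig and then running wsl --shutdown.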

                                                
                                    

Test pass (358/427)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 21.04
4 TestDownloadOnly/v1.28.0/preload-exists 0.05
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.28
9 TestDownloadOnly/v1.28.0/DeleteAll 0.85
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.54
12 TestDownloadOnly/v1.34.2/json-events 16.04
13 TestDownloadOnly/v1.34.2/preload-exists 0
16 TestDownloadOnly/v1.34.2/kubectl 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.23
18 TestDownloadOnly/v1.34.2/DeleteAll 0.97
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.49
21 TestDownloadOnly/v1.35.0-beta.0/json-events 18.51
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.24
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.84
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.48
29 TestDownloadOnlyKic 1.49
30 TestBinaryMirror 2.45
31 TestOffline 126.14
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 292.54
38 TestAddons/serial/Volcano 51.35
40 TestAddons/serial/GCPAuth/Namespaces 0.25
41 TestAddons/serial/GCPAuth/FakeCredentials 9.11
45 TestAddons/parallel/RegistryCreds 1.35
47 TestAddons/parallel/InspektorGadget 12.82
48 TestAddons/parallel/MetricsServer 7.11
50 TestAddons/parallel/CSI 67.71
51 TestAddons/parallel/Headlamp 29.88
52 TestAddons/parallel/CloudSpanner 6.91
53 TestAddons/parallel/LocalPath 57.06
54 TestAddons/parallel/NvidiaDevicePlugin 6.33
55 TestAddons/parallel/Yakd 12.78
56 TestAddons/parallel/AmdGpuDevicePlugin 6.54
57 TestAddons/StoppedEnableDisable 12.8
58 TestCertOptions 60.6
59 TestCertExpiration 283.25
60 TestDockerFlags 54.08
61 TestForceSystemdFlag 58.31
62 TestForceSystemdEnv 77.94
68 TestErrorSpam/start 2.57
69 TestErrorSpam/status 2.14
70 TestErrorSpam/pause 2.64
71 TestErrorSpam/unpause 2.62
72 TestErrorSpam/stop 18.72
75 TestFunctional/serial/CopySyncFile 0.04
76 TestFunctional/serial/StartWithProxy 86.56
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 50.3
79 TestFunctional/serial/KubeContext 0.09
80 TestFunctional/serial/KubectlGetPods 0.27
83 TestFunctional/serial/CacheCmd/cache/add_remote 10.23
84 TestFunctional/serial/CacheCmd/cache/add_local 4.03
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
86 TestFunctional/serial/CacheCmd/cache/list 0.21
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.6
88 TestFunctional/serial/CacheCmd/cache/cache_reload 4.51
89 TestFunctional/serial/CacheCmd/cache/delete 0.38
90 TestFunctional/serial/MinikubeKubectlCmd 0.39
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.83
92 TestFunctional/serial/ExtraConfig 49.67
93 TestFunctional/serial/ComponentHealth 0.14
94 TestFunctional/serial/LogsCmd 1.75
95 TestFunctional/serial/LogsFileCmd 1.84
96 TestFunctional/serial/InvalidService 5.64
98 TestFunctional/parallel/ConfigCmd 1.23
100 TestFunctional/parallel/DryRun 1.55
101 TestFunctional/parallel/InternationalLanguage 0.76
102 TestFunctional/parallel/StatusCmd 2.06
107 TestFunctional/parallel/AddonsCmd 0.43
108 TestFunctional/parallel/PersistentVolumeClaim 23.42
110 TestFunctional/parallel/SSHCmd 1.19
111 TestFunctional/parallel/CpCmd 3.39
112 TestFunctional/parallel/MySQL 93.06
113 TestFunctional/parallel/FileSync 0.54
114 TestFunctional/parallel/CertSync 3.23
118 TestFunctional/parallel/NodeLabels 0.14
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
122 TestFunctional/parallel/License 1.41
123 TestFunctional/parallel/DockerEnv/powershell 10.58
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.39
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.38
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.34
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.98
140 TestFunctional/parallel/ProfileCmd/profile_list 0.86
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.91
142 TestFunctional/parallel/ServiceCmd/List 0.85
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.86
144 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
145 TestFunctional/parallel/Version/short 0.19
146 TestFunctional/parallel/Version/components 0.84
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.44
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.45
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.47
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.44
151 TestFunctional/parallel/ImageCommands/ImageBuild 12.51
152 TestFunctional/parallel/ImageCommands/Setup 1.79
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.25
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.82
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.55
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.64
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.88
158 TestFunctional/parallel/ServiceCmd/Format 15.04
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.87
161 TestFunctional/parallel/ServiceCmd/URL 15.01
162 TestFunctional/delete_echo-server_images 0.14
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.06
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0.01
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.1
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 9.68
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 3.68
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.17
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.18
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.56
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 4.47
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.38
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.3
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 1.18
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 1.49
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.61
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.42
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 1.08
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 3.32
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.6
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 3.44
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.55
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 2.73
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.3
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.32
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.32
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.15
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.86
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.46
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.47
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.46
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.45
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.86
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.9
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 3.52
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 2.95
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 3.62
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.64
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.95
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.18
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.82
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.84
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.8
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.79
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.14
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.06
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.06
260 TestMultiControlPlane/serial/StartCluster 217.8
261 TestMultiControlPlane/serial/DeployApp 9.54
262 TestMultiControlPlane/serial/PingHostFromPods 2.62
263 TestMultiControlPlane/serial/AddWorkerNode 55.15
264 TestMultiControlPlane/serial/NodeLabels 0.14
265 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.95
266 TestMultiControlPlane/serial/CopyFile 33.23
267 TestMultiControlPlane/serial/StopSecondaryNode 13.39
268 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.55
269 TestMultiControlPlane/serial/RestartSecondaryNode 103.52
270 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.97
271 TestMultiControlPlane/serial/RestartClusterKeepsNodes 168.41
272 TestMultiControlPlane/serial/DeleteSecondaryNode 14.65
273 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.46
274 TestMultiControlPlane/serial/StopCluster 37.27
275 TestMultiControlPlane/serial/RestartCluster 84.15
276 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.49
277 TestMultiControlPlane/serial/AddSecondaryNode 100.52
278 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.94
281 TestImageBuild/serial/Setup 49.18
282 TestImageBuild/serial/NormalBuild 4.64
283 TestImageBuild/serial/BuildWithBuildArg 2.16
284 TestImageBuild/serial/BuildWithDockerIgnore 1.33
285 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.29
290 TestJSONOutput/start/Command 81.63
291 TestJSONOutput/start/Audit 0
293 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/pause/Command 1.14
297 TestJSONOutput/pause/Audit 0
299 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/unpause/Command 0.94
303 TestJSONOutput/unpause/Audit 0
305 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/stop/Command 12.18
309 TestJSONOutput/stop/Audit 0
311 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
313 TestErrorJSONOutput 0.65
315 TestKicCustomNetwork/create_custom_network 53.41
316 TestKicCustomNetwork/use_default_bridge_network 51.9
317 TestKicExistingNetwork 53.92
318 TestKicCustomSubnet 53.24
319 TestKicStaticIP 56.69
320 TestMainNoArgs 0.16
321 TestMinikubeProfile 100.84
324 TestMountStart/serial/StartWithMountFirst 13.83
325 TestMountStart/serial/VerifyMountFirst 0.57
326 TestMountStart/serial/StartWithMountSecond 13.37
327 TestMountStart/serial/VerifyMountSecond 0.54
328 TestMountStart/serial/DeleteFirst 2.43
329 TestMountStart/serial/VerifyMountPostDelete 0.51
330 TestMountStart/serial/Stop 1.87
331 TestMountStart/serial/RestartStopped 10.74
332 TestMountStart/serial/VerifyMountPostStop 0.51
335 TestMultiNode/serial/FreshStart2Nodes 129.9
336 TestMultiNode/serial/DeployApp2Nodes 7.7
337 TestMultiNode/serial/PingHostFrom2Pods 1.74
338 TestMultiNode/serial/AddNode 53.44
339 TestMultiNode/serial/MultiNodeLabels 0.13
340 TestMultiNode/serial/ProfileList 1.37
341 TestMultiNode/serial/CopyFile 18.93
342 TestMultiNode/serial/StopNode 3.66
343 TestMultiNode/serial/StartAfterStop 12.98
344 TestMultiNode/serial/RestartKeepsNodes 85.77
345 TestMultiNode/serial/DeleteNode 8.23
346 TestMultiNode/serial/StopMultiNode 24
347 TestMultiNode/serial/RestartMultiNode 57.04
348 TestMultiNode/serial/ValidateNameConflict 51.55
352 TestPreload 142.6
353 TestScheduledStopWindows 112.76
357 TestInsufficientStorage 28.99
358 TestRunningBinaryUpgrade 395.94
361 TestMissingContainerUpgrade 139.77
362 TestStoppedBinaryUpgrade/Setup 0.95
365 TestNoKubernetes/serial/StartNoK8sWithVersion 0.27
373 TestPause/serial/Start 120.86
374 TestNoKubernetes/serial/StartWithK8s 85.37
375 TestStoppedBinaryUpgrade/Upgrade 156.24
376 TestNoKubernetes/serial/StartWithStopK8s 26.06
377 TestNoKubernetes/serial/Start 15.12
378 TestPause/serial/SecondStartNoReconfiguration 47.34
379 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
380 TestNoKubernetes/serial/VerifyK8sNotRunning 0.59
381 TestNoKubernetes/serial/ProfileList 4.18
382 TestNoKubernetes/serial/Stop 2.76
383 TestNoKubernetes/serial/StartNoArgs 12.24
384 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.56
396 TestStoppedBinaryUpgrade/MinikubeLogs 2.63
397 TestPause/serial/Pause 1.55
398 TestPause/serial/VerifyStatus 0.66
399 TestPause/serial/Unpause 1.9
400 TestPause/serial/PauseAgain 1.76
401 TestPause/serial/DeletePaused 5.04
402 TestPause/serial/VerifyDeletedResources 4.98
404 TestStartStop/group/old-k8s-version/serial/FirstStart 66.36
405 TestStartStop/group/old-k8s-version/serial/DeployApp 9.67
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.78
407 TestStartStop/group/old-k8s-version/serial/Stop 12.26
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.52
409 TestStartStop/group/old-k8s-version/serial/SecondStart 56.84
412 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.05
413 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.33
414 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.48
415 TestStartStop/group/old-k8s-version/serial/Pause 5.24
417 TestStartStop/group/embed-certs/serial/FirstStart 86.77
419 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.01
420 TestStartStop/group/embed-certs/serial/DeployApp 11.57
421 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.56
422 TestStartStop/group/embed-certs/serial/Stop 12.26
423 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.55
424 TestStartStop/group/embed-certs/serial/SecondStart 49.35
425 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.65
426 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.69
427 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.31
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.61
429 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 63.92
430 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
431 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.34
432 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.49
433 TestStartStop/group/embed-certs/serial/Pause 5.17
436 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
437 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.28
438 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.46
439 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.83
440 TestNetworkPlugins/group/auto/Start 87.21
441 TestNetworkPlugins/group/auto/KubeletFlags 0.56
442 TestNetworkPlugins/group/auto/NetCatPod 15.49
443 TestNetworkPlugins/group/auto/DNS 0.23
444 TestNetworkPlugins/group/auto/Localhost 0.19
445 TestNetworkPlugins/group/auto/HairPin 0.19
446 TestNetworkPlugins/group/kindnet/Start 77.89
447 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
448 TestNetworkPlugins/group/kindnet/KubeletFlags 0.56
449 TestNetworkPlugins/group/kindnet/NetCatPod 15.54
450 TestNetworkPlugins/group/kindnet/DNS 0.24
451 TestNetworkPlugins/group/kindnet/Localhost 0.2
452 TestNetworkPlugins/group/kindnet/HairPin 0.22
455 TestNetworkPlugins/group/calico/Start 114.59
456 TestStartStop/group/no-preload/serial/Stop 1.88
457 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.57
459 TestNetworkPlugins/group/calico/ControllerPod 6.01
460 TestNetworkPlugins/group/calico/KubeletFlags 0.59
461 TestNetworkPlugins/group/calico/NetCatPod 15.55
462 TestNetworkPlugins/group/calico/DNS 0.27
463 TestNetworkPlugins/group/calico/Localhost 0.23
464 TestNetworkPlugins/group/calico/HairPin 0.2
465 TestNetworkPlugins/group/custom-flannel/Start 81.64
466 TestNetworkPlugins/group/false/Start 82.98
467 TestStartStop/group/newest-cni/serial/DeployApp 0
469 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.57
470 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.57
471 TestNetworkPlugins/group/custom-flannel/DNS 0.24
472 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
473 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
474 TestNetworkPlugins/group/false/KubeletFlags 0.53
475 TestNetworkPlugins/group/false/NetCatPod 15.52
476 TestNetworkPlugins/group/flannel/Start 78.69
477 TestNetworkPlugins/group/false/DNS 0.25
478 TestNetworkPlugins/group/false/Localhost 0.21
479 TestNetworkPlugins/group/false/HairPin 0.2
480 TestNetworkPlugins/group/enable-default-cni/Start 87.86
481 TestStartStop/group/newest-cni/serial/Stop 4.02
482 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.56
484 TestNetworkPlugins/group/flannel/ControllerPod 6.01
485 TestNetworkPlugins/group/flannel/KubeletFlags 0.55
486 TestNetworkPlugins/group/flannel/NetCatPod 15.4
487 TestNetworkPlugins/group/flannel/DNS 0.23
488 TestNetworkPlugins/group/flannel/Localhost 0.2
489 TestNetworkPlugins/group/flannel/HairPin 0.2
490 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.54
491 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.5
492 TestNetworkPlugins/group/bridge/Start 85.74
493 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
494 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
495 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
496 TestNetworkPlugins/group/kubenet/Start 89.04
498 TestNetworkPlugins/group/bridge/KubeletFlags 0.54
499 TestNetworkPlugins/group/bridge/NetCatPod 15.49
500 TestNetworkPlugins/group/bridge/DNS 0.25
501 TestNetworkPlugins/group/bridge/Localhost 0.21
502 TestNetworkPlugins/group/bridge/HairPin 0.24
503 TestNetworkPlugins/group/kubenet/KubeletFlags 0.57
504 TestNetworkPlugins/group/kubenet/NetCatPod 14.53
505 TestNetworkPlugins/group/kubenet/DNS 0.23
506 TestNetworkPlugins/group/kubenet/Localhost 0.2
507 TestNetworkPlugins/group/kubenet/HairPin 0.2
508 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
509 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.48
TestDownloadOnly/v1.28.0/json-events (21.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (21.0384546s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (21.04s)
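
With -o=json, minikube writes one structured JSON event per line to stdout, which is what the json-events assertions consume. A minimal line-oriented reader sketch, assuming only that each line is a self-contained JSON object (no field names from minikube's event schema are assumed):

    package main

    // Sketch of a line-oriented JSON event reader, as a json-events consumer
    // might use: decode each stdout line into a generic map and inspect it.
    // Illustrative only; this is not the test harness's actual parser.
    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
        for sc.Scan() {
            var ev map[string]any
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
                continue
            }
            fmt.Printf("event with %d fields\n", len(ev))
        }
    }

Piping the start command's output into such a reader (minikube start -o=json ... | reader) yields one summary line per emitted event.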

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 04:26:23.788029   11704 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1216 04:26:23.839428   11704 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.05s)
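
The preload-exists check above amounts to locating the cached tarball whose name encodes the preload schema version, Kubernetes version, container runtime, storage driver, and architecture. A sketch that composes the same path and stats it; the format string is inferred from the file name in the log, not taken from minikube's source:

    package main

    // Sketch: rebuild the preload tarball path seen in the log
    // (preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4)
    // from its visible components and check that it exists locally.
    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func preloadPath(cacheDir, k8sVersion, runtime, storageDriver, arch string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-%s-%s.tar.lz4",
            k8sVersion, runtime, storageDriver, arch)
        return filepath.Join(cacheDir, "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache`,
            "v1.28.0", "docker", "overlay2", "amd64")
        if _, err := os.Stat(p); err != nil {
            fmt.Println("preload missing:", err)
            return
        }
        fmt.Println("found local preload:", p)
    }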

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-666000
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-666000: exit status 85 (272.4426ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-666000 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:26:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:26:02.821207   10368 out.go:360] Setting OutFile to fd 676 ...
	I1216 04:26:02.863900   10368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:26:02.863967   10368 out.go:374] Setting ErrFile to fd 680...
	I1216 04:26:02.863967   10368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1216 04:26:02.873431   10368 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1216 04:26:02.881433   10368 out.go:368] Setting JSON to true
	I1216 04:26:02.883430   10368 start.go:133] hostinfo: {"hostname":"minikube4","uptime":184,"bootTime":1765858978,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:26:02.883430   10368 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:26:02.900865   10368 out.go:99] [download-only-666000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	W1216 04:26:02.901728   10368 preload.go:354] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1216 04:26:02.901728   10368 notify.go:221] Checking for updates...
	I1216 04:26:02.903757   10368 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:26:02.905802   10368 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:26:02.907786   10368 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:26:02.909115   10368 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1216 04:26:02.913681   10368 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:26:02.914474   10368 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:26:03.028432   10368 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:26:03.032111   10368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:26:03.758447   10368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-16 04:26:03.737457503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:26:03.766137   10368 out.go:99] Using the docker driver based on user configuration
	I1216 04:26:03.766661   10368 start.go:309] selected driver: docker
	I1216 04:26:03.766691   10368 start.go:927] validating driver "docker" against <nil>
	I1216 04:26:03.771990   10368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:26:04.038319   10368 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-16 04:26:04.019735873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:26:04.038319   10368 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:26:04.088400   10368 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1216 04:26:04.089297   10368 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:26:04.093563   10368 out.go:171] Using Docker Desktop driver with root privileges
	I1216 04:26:04.094856   10368 cni.go:84] Creating CNI manager for ""
	I1216 04:26:04.096839   10368 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:26:04.096839   10368 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 04:26:04.096839   10368 start.go:353] cluster config:
	{Name:download-only-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:26:04.098494   10368 out.go:99] Starting "download-only-666000" primary control-plane node in "download-only-666000" cluster
	I1216 04:26:04.098494   10368 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:26:04.101255   10368 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:26:04.101811   10368 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1216 04:26:04.101811   10368 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:26:04.157890   10368 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:26:04.158837   10368 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1216 04:26:04.159104   10368 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1216 04:26:04.159170   10368 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 04:26:04.160345   10368 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:26:04.161610   10368 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1216 04:26:04.161610   10368 cache.go:65] Caching tarball of preloaded images
	I1216 04:26:04.161728   10368 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1216 04:26:04.165198   10368 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1216 04:26:04.165198   10368 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1216 04:26:04.264233   10368 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1216 04:26:04.264951   10368 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1216 04:26:12.281324   10368 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 04:26:21.855277   10368 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1216 04:26:21.855613   10368 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-666000\config.json ...
	I1216 04:26:21.855613   10368 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-666000\config.json: {Name:mk8560b92df2fb28584f0d9b9810a912dae94103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:21.884321   10368 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1216 04:26:21.886750   10368 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.28.0/kubectl.exe
	
	
	* The control-plane node download-only-666000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-666000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.28s)
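
The Last Start log above shows the downloader obtaining the tarball's MD5 from the GCS API and appending it to the download URL as ?checksum=md5:..., so the transfer can be verified against that digest. A standalone sketch of the same post-download verification; verifyMD5 is a hypothetical helper, not minikube's function:

    package main

    // Sketch (hypothetical helper, not minikube's code): verify a downloaded
    // preload tarball against the MD5 digest obtained from the GCS API, as in
    // the ?checksum=md5:8a955be8... URL recorded in the log above.
    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func verifyMD5(path, wantHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        // Usage: verify <file> <md5-hex>
        if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("checksum OK")
    }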

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.85s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-666000
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.54s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (16.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-144100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-144100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker: (16.0374751s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (16.04s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 04:26:41.552012   11704 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1216 04:26:41.552117   11704 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
--- PASS: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-144100
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-144100: exit status 85 (227.6458ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-666000 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ delete  │ -p download-only-666000                                                                                                                           │ download-only-666000 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ start   │ -o=json --download-only -p download-only-144100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker │ download-only-144100 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:26:25
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:26:25.585493    8904 out.go:360] Setting OutFile to fd 720 ...
	I1216 04:26:25.630517    8904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:26:25.630517    8904 out.go:374] Setting ErrFile to fd 632...
	I1216 04:26:25.630517    8904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:26:25.644489    8904 out.go:368] Setting JSON to true
	I1216 04:26:25.647490    8904 start.go:133] hostinfo: {"hostname":"minikube4","uptime":207,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:26:25.647490    8904 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:26:25.654489    8904 out.go:99] [download-only-144100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:26:25.654489    8904 notify.go:221] Checking for updates...
	I1216 04:26:25.658493    8904 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:26:25.661106    8904 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:26:25.666055    8904 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:26:25.669672    8904 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1216 04:26:25.673665    8904 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:26:25.674665    8904 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:26:25.792457    8904 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:26:25.796026    8904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:26:26.034065    8904 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-16 04:26:26.013878297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:26:26.249635    8904 out.go:99] Using the docker driver based on user configuration
	I1216 04:26:26.250262    8904 start.go:309] selected driver: docker
	I1216 04:26:26.250340    8904 start.go:927] validating driver "docker" against <nil>
	I1216 04:26:26.257210    8904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:26:26.492647    8904 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-16 04:26:26.474581964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:26:26.492647    8904 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:26:26.530780    8904 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1216 04:26:26.531379    8904 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:26:26.534140    8904 out.go:171] Using Docker Desktop driver with root privileges
	I1216 04:26:26.536696    8904 cni.go:84] Creating CNI manager for ""
	I1216 04:26:26.537044    8904 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:26:26.537044    8904 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 04:26:26.537044    8904 start.go:353] cluster config:
	{Name:download-only-144100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-144100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:26:26.538660    8904 out.go:99] Starting "download-only-144100" primary control-plane node in "download-only-144100" cluster
	I1216 04:26:26.538660    8904 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:26:26.541150    8904 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:26:26.541150    8904 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 04:26:26.541809    8904 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:26:26.597112    8904 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:26:26.597112    8904 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1216 04:26:26.597112    8904 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1216 04:26:26.597112    8904 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 04:26:26.597112    8904 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 04:26:26.597112    8904 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 04:26:26.597112    8904 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 04:26:26.600711    8904 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 04:26:26.600793    8904 cache.go:65] Caching tarball of preloaded images
	I1216 04:26:26.600793    8904 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 04:26:26.604256    8904 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1216 04:26:26.604314    8904 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1216 04:26:26.701508    8904 preload.go:295] Got checksum from GCS API "cafa99c47d4d00983a02f051962239e0"
	I1216 04:26:26.702120    8904 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4?checksum=md5:cafa99c47d4d00983a02f051962239e0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1216 04:26:39.954753    8904 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1216 04:26:39.955063    8904 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-144100\config.json ...
	I1216 04:26:39.955556    8904 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-144100\config.json: {Name:mk511fa82b0b6c133d59b5ae81c6e8bb20a47562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:39.955720    8904 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1216 04:26:39.956612    8904 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.34.2/kubectl.exe
	
	
	* The control-plane node download-only-144100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-144100"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.23s)
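The "windows sanitize" lines in the log above show the image reference being rewritten before it is used as a cache file name: the ':' in the tag and digest separators becomes '_', because colons are not legal in Windows file names. A minimal Go sketch of that rewrite, assuming it is applied to the base file name only (so the drive-letter colon in C:\... is never touched); sanitizeName is a hypothetical stand-in, not minikube's localpath implementation:

package main

import (
	"fmt"
	"strings"
)

// sanitizeName rewrites the character that is invalid in Windows file
// names here: ':' (tag and digest separators in the image reference).
// Apply it to base names, never to full paths.
func sanitizeName(name string) string {
	return strings.ReplaceAll(name, ":", "_")
}

func main() {
	in := "kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar"
	fmt.Println(sanitizeName(in))
	// kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
}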

TestDownloadOnly/v1.34.2/DeleteAll (0.97s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.97s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.49s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-144100
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.49s)

TestDownloadOnly/v1.35.0-beta.0/json-events (18.51s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-994400 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-994400 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker: (18.5087111s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (18.51s)
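With -o=json, start reports progress as line-delimited JSON events on stdout, which is what the json-events test consumes. A hedged sketch of a consumer: the schema assumed here (a "type" string plus a free-form "data" object carrying a "message" field) is inferred from minikube's JSON output convention and is not a documented contract, and the profile name is illustrative:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// event models one line of `minikube start -o=json` output (assumed shape).
type event struct {
	Type string                 `json:"type"`
	Data map[string]interface{} `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "start", "-o=json",
		"--download-only", "-p", "download-only-demo", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // single events can be long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate any non-JSON noise on stdout
		}
		fmt.Printf("%s: %v\n", ev.Type, ev.Data["message"])
	}
	if err := cmd.Wait(); err != nil {
		fmt.Fprintln(os.Stderr, "minikube exited:", err)
	}
}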

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 04:27:01.749690   11704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1216 04:27:01.749690   11704 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
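The preload-exists check reduces to a stat of the expected tarball under the minikube cache; both the directory layout and the preloaded-images-k8s-v18-&lt;version&gt;-docker-overlay2-amd64.tar.lz4 naming are visible in the log line above. A small sketch under exactly those assumptions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache path seen in the log; "v18" is the preload
// schema version this report uses.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. ...\minikube-integration\.minikube
	p := preloadPath(home, "v1.35.0-beta.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", p)
}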

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.24s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-994400
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-994400: exit status 85 (230.9902ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                           │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-666000 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker        │ download-only-666000 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ delete  │ -p download-only-666000                                                                                                                                  │ download-only-666000 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ start   │ -o=json --download-only -p download-only-144100 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker        │ download-only-144100 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ delete  │ -p download-only-144100                                                                                                                                  │ download-only-144100 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ start   │ -o=json --download-only -p download-only-994400 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker │ download-only-994400 │ minikube4\jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:26:43
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:26:43.312654   10168 out.go:360] Setting OutFile to fd 660 ...
	I1216 04:26:43.353652   10168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:26:43.354655   10168 out.go:374] Setting ErrFile to fd 596...
	I1216 04:26:43.354655   10168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:26:43.367656   10168 out.go:368] Setting JSON to true
	I1216 04:26:43.370656   10168 start.go:133] hostinfo: {"hostname":"minikube4","uptime":225,"bootTime":1765858978,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:26:43.370656   10168 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:26:43.375656   10168 out.go:99] [download-only-994400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:26:43.375656   10168 notify.go:221] Checking for updates...
	I1216 04:26:43.377655   10168 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:26:43.379642   10168 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:26:43.382657   10168 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:26:43.384659   10168 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1216 04:26:43.389654   10168 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:26:43.390663   10168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:26:43.501764   10168 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:26:43.504861   10168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:26:43.756899   10168 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-16 04:26:43.738218209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:26:43.844870   10168 out.go:99] Using the docker driver based on user configuration
	I1216 04:26:43.845289   10168 start.go:309] selected driver: docker
	I1216 04:26:43.845289   10168 start.go:927] validating driver "docker" against <nil>
	I1216 04:26:43.852365   10168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:26:44.101221   10168 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-12-16 04:26:44.07801789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescri
ption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progra
m Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:26:44.101221   10168 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:26:44.142849   10168 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1216 04:26:44.143551   10168 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:26:44.146095   10168 out.go:171] Using Docker Desktop driver with root privileges
	I1216 04:26:44.147901   10168 cni.go:84] Creating CNI manager for ""
	I1216 04:26:44.147901   10168 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1216 04:26:44.147901   10168 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 04:26:44.148543   10168 start.go:353] cluster config:
	{Name:download-only-994400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-994400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:26:44.151038   10168 out.go:99] Starting "download-only-994400" primary control-plane node in "download-only-994400" cluster
	I1216 04:26:44.151038   10168 cache.go:134] Beginning downloading kic base image for docker with docker
	I1216 04:26:44.154200   10168 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1216 04:26:44.154200   10168 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:26:44.154200   10168 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1216 04:26:44.214819   10168 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1216 04:26:44.214819   10168 cache.go:65] Caching tarball of preloaded images
	I1216 04:26:44.214819   10168 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1216 04:26:44.217817   10168 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1216 04:26:44.217817   10168 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1216 04:26:44.220822   10168 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1216 04:26:44.221815   10168 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1216 04:26:44.221815   10168 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1216 04:26:44.221815   10168 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1216 04:26:44.221815   10168 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1216 04:26:44.221815   10168 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1216 04:26:44.221815   10168 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1216 04:26:44.314807   10168 preload.go:295] Got checksum from GCS API "7f0e1a4aaa3540d32279d04bf9728fae"
	I1216 04:26:44.315431   10168 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:7f0e1a4aaa3540d32279d04bf9728fae -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-994400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-994400"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.24s)
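As in the v1.34.2 run, the preload URL carries a ?checksum=md5:... hint, with the digest fetched from the GCS API first ("7f0e1a4aaa3540d32279d04bf9728fae" above). The pattern such a hint implies is verify-while-downloading; a sketch of that pattern, illustrative rather than minikube's actual download package:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadVerified streams url to dest while hashing, then rejects the
// file if the MD5 does not match the expected digest.
func downloadVerified(url, wantMD5, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadVerified(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4",
		"7f0e1a4aaa3540d32279d04bf9728fae",
		"preload.tar.lz4")
	fmt.Println(err)
}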

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.84s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.84s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.48s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-994400
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.48s)

TestDownloadOnlyKic (1.49s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-380300 --alsologtostderr --driver=docker
helpers_test.go:176: Cleaning up "download-docker-380300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-380300
--- PASS: TestDownloadOnlyKic (1.49s)

TestBinaryMirror (2.45s)

=== RUN   TestBinaryMirror
I1216 04:27:06.251260   11704 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-264600 --alsologtostderr --binary-mirror http://127.0.0.1:64204 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-264600 --alsologtostderr --binary-mirror http://127.0.0.1:64204 --driver=docker: (1.3675186s)
helpers_test.go:176: Cleaning up "binary-mirror-264600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-264600
--- PASS: TestBinaryMirror (2.45s)
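TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:64204 in this run). Assuming the mirror only has to reproduce the dl.k8s.io path layout (release/&lt;version&gt;/bin/&lt;os&gt;/&lt;arch&gt;/kubectl.exe plus the matching .sha256 files), a minimal stand-in server could be as small as this; the directory name is illustrative:

package main

import (
	"log"
	"net/http"
)

func main() {
	// mirror-root must contain e.g.
	//   release/v1.34.2/bin/windows/amd64/kubectl.exe
	//   release/v1.34.2/bin/windows/amd64/kubectl.exe.sha256
	fs := http.FileServer(http.Dir("mirror-root"))
	log.Println("binary mirror on http://127.0.0.1:64204 (pass via --binary-mirror)")
	log.Fatal(http.ListenAndServe("127.0.0.1:64204", fs))
}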

TestOffline (126.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-205700 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-205700 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (2m1.9162881s)
helpers_test.go:176: Cleaning up "offline-docker-205700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-205700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-205700: (4.2274142s)
--- PASS: TestOffline (126.14s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-555000
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-555000: exit status 85 (207.3531ms)

-- stdout --
	* Profile "addons-555000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-555000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-555000
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-555000: exit status 85 (185.3013ms)

-- stdout --
	* Profile "addons-555000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-555000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)
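Both PreSetup tests assert that addon commands against a profile that does not exist fail with exit status 85 instead of succeeding silently. In Go, that assertion comes down to unwrapping *exec.ExitError, roughly as follows (the profile name here is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"addons", "enable", "dashboard", "-p", "no-such-profile")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Printf("got expected exit status 85; output:\n%s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v, output:\n%s", err, out)
}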

TestAddons/Setup (292.54s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-555000 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-555000 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m52.5350836s)
--- PASS: TestAddons/Setup (292.54s)

TestAddons/serial/Volcano (51.35s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 17.3886ms
addons_test.go:886: volcano-controller stabilized in 17.4916ms
addons_test.go:878: volcano-admission stabilized in 17.4916ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-h2js7" [2960baf1-1d91-4d2c-9b13-67861341b39f] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.007375s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-c67ct" [ae17d37d-31e2-45dd-b6fe-7671e4bc6072] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0105955s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-crsmm" [ba4c758a-ae66-49aa-ad68-97753137fec6] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0073322s
addons_test.go:905: (dbg) Run:  kubectl --context addons-555000 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-555000 create -f testdata\vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-555000 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [1e571446-33ee-4c70-a4d1-5f429c49d56e] Pending
helpers_test.go:353: "test-job-nginx-0" [1e571446-33ee-4c70-a4d1-5f429c49d56e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [1e571446-33ee-4c70-a4d1-5f429c49d56e] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 20.0069071s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable volcano --alsologtostderr -v=1: (12.4623673s)
--- PASS: TestAddons/serial/Volcano (51.35s)
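The Volcano checks repeatedly wait for pods matching a label selector to become healthy within a deadline. A rough equivalent of that polling, expressed as a kubectl wait wrapper rather than the suite's helpers_test.go machinery (the wrapper function is ours, not the harness's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPods blocks until all pods matching selector in ns are Ready,
// or the timeout elapses; kubectl does the actual polling.
func waitForPods(kubeContext, ns, selector string, timeout time.Duration) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"wait", "--for=condition=Ready", "pod", "-l", selector,
		"-n", ns, fmt.Sprintf("--timeout=%s", timeout))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("pods not ready: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := waitForPods("addons-555000", "volcano-system", "app=volcano-scheduler", 6*time.Minute)
	fmt.Println(err)
}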

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-555000 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-555000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (9.11s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-555000 create -f testdata\busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-555000 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [436ec47a-961b-4b4c-bd34-8ac0e8a3d910] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [436ec47a-961b-4b4c-bd34-8ac0e8a3d910] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.0060385s
addons_test.go:696: (dbg) Run:  kubectl --context addons-555000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-555000 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-555000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-555000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.11s)

TestAddons/parallel/RegistryCreds (1.35s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.5821ms
addons_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-555000
addons_test.go:334: (dbg) Run:  kubectl --context addons-555000 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.35s)

TestAddons/parallel/InspektorGadget (12.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-gmhzm" [60637ea2-13a2-4516-849b-c04e41bd3751] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.2848769s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable inspektor-gadget --alsologtostderr -v=1: (6.5345374s)
--- PASS: TestAddons/parallel/InspektorGadget (12.82s)

TestAddons/parallel/MetricsServer (7.11s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.6105ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-rsttc" [550c3006-3a65-43d1-99e7-90f293e07f15] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.2891768s
addons_test.go:465: (dbg) Run:  kubectl --context addons-555000 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable metrics-server --alsologtostderr -v=1: (1.6352691s)
--- PASS: TestAddons/parallel/MetricsServer (7.11s)

TestAddons/parallel/CSI (67.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1216 04:33:18.678151   11704 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 04:33:18.827130   11704 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 04:33:18.827130   11704 kapi.go:107] duration metric: took 148.9791ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 148.9791ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ec1bce34-e414-4ae6-aba4-a7344f002e47] Pending
helpers_test.go:353: "task-pv-pod" [ec1bce34-e414-4ae6-aba4-a7344f002e47] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [ec1bce34-e414-4ae6-aba4-a7344f002e47] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.0056174s
addons_test.go:574: (dbg) Run:  kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-555000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-555000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-555000 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-555000 delete pod task-pv-pod: (1.9950387s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-555000 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [edc6c688-13c9-43dc-a8b7-1a0f2f4129d4] Pending
helpers_test.go:353: "task-pv-pod-restore" [edc6c688-13c9-43dc-a8b7-1a0f2f4129d4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [edc6c688-13c9-43dc-a8b7-1a0f2f4129d4] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0068216s
addons_test.go:616: (dbg) Run:  kubectl --context addons-555000 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-555000 delete pod task-pv-pod-restore: (1.1328671s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-555000 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-555000 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable volumesnapshots --alsologtostderr -v=1: (1.434653s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.409705s)
--- PASS: TestAddons/parallel/CSI (67.71s)
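Condensed, the CSI exercise above is a create/snapshot/restore round-trip against the csi-hostpath-driver; a rough manual version using the same fixtures (polling loops omitted):
    kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pvc.yaml
    kubectl --context addons-555000 get pvc hpvc -o jsonpath={.status.phase}        # repeat until Bound
    kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pv-pod.yaml
    kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\snapshot.yaml
    kubectl --context addons-555000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-555000 delete pod task-pv-pod
    kubectl --context addons-555000 delete pvc hpvc
    kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
    kubectl --context addons-555000 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml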

                                                
                                    
TestAddons/parallel/Headlamp (29.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-555000 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-555000 --alsologtostderr -v=1: (1.3912098s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-7q5ck" [721e6900-9e37-43b6-9ca4-5271bcde5fb2] Pending
helpers_test.go:353: "headlamp-dfcdc64b-7q5ck" [721e6900-9e37-43b6-9ca4-5271bcde5fb2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-7q5ck" [721e6900-9e37-43b6-9ca4-5271bcde5fb2] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.0060917s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable headlamp --alsologtostderr -v=1: (6.4787997s)
--- PASS: TestAddons/parallel/Headlamp (29.88s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.91s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-6rtwx" [7fecaa5d-58c0-45ce-a6af-89fbe29a0842] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0063517s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.91s)

                                                
                                    
TestAddons/parallel/LocalPath (57.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-555000 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-555000 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [47f0caf3-3f78-4ea3-91ff-4e7bcfa437f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [47f0caf3-3f78-4ea3-91ff-4e7bcfa437f9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [47f0caf3-3f78-4ea3-91ff-4e7bcfa437f9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0060014s
addons_test.go:969: (dbg) Run:  kubectl --context addons-555000 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 ssh "cat /opt/local-path-provisioner/pvc-df25ee93-f08e-4200-82ba-23614c7d12dd_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-555000 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-555000 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.3083436s)
--- PASS: TestAddons/parallel/LocalPath (57.06s)
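The local-path check writes a file through the provisioned volume and then reads it back from the node's host path. The PV directory name embeds the generated PVC UID, so the path in the sketch below is a placeholder:
    kubectl --context addons-555000 apply -f testdata\storage-provisioner-rancher\pvc.yaml
    kubectl --context addons-555000 apply -f testdata\storage-provisioner-rancher\pod.yaml
    minikube -p addons-555000 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"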

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.33s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-k9hvj" [08aee569-fba2-4d63-8a9e-71c918faa8d3] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0079343s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.3201173s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.33s)

                                                
                                    
TestAddons/parallel/Yakd (12.78s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-t99cv" [dd46fb43-370c-4719-bc5a-a7006c7e4b99] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0059007s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable yakd --alsologtostderr -v=1: (6.7717945s)
--- PASS: TestAddons/parallel/Yakd (12.78s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.54s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-sml7c" [fbf6bab1-50ce-4c43-80ae-c92b027f745e] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.0083132s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.5248021s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.54s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.8s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-555000
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-555000: (11.9525868s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-555000
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-555000
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-555000
--- PASS: TestAddons/StoppedEnableDisable (12.80s)
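What this verifies: addon enable/disable does not require a running cluster, so it still works after a stop. Manually:
    minikube stop -p addons-555000
    minikube addons enable dashboard -p addons-555000
    minikube addons disable dashboard -p addons-555000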

                                                
                                    
TestCertOptions (60.6s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-935600 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E1216 06:02:01.835581   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-935600 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (55.5067262s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-935600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1216 06:02:52.755913   11704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-935600
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-935600 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-935600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-935600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-935600: (3.8574366s)
--- PASS: TestCertOptions (60.60s)
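To confirm the flags took effect, the test reads the API server certificate from inside the node; the extra IPs and names passed via --apiserver-ips/--apiserver-names should show up in the X509v3 Subject Alternative Name block of:
    minikube -p cert-options-935600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"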

                                                
                                    
TestCertExpiration (283.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-300900 --memory=3072 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-300900 --memory=3072 --cert-expiration=3m --driver=docker: (1m6.9918776s)
E1216 06:01:18.309178   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-300900 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-300900 --memory=3072 --cert-expiration=8760h --driver=docker: (32.1234901s)
helpers_test.go:176: Cleaning up "cert-expiration-300900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-300900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-300900: (4.1376068s)
--- PASS: TestCertExpiration (283.25s)
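The flow: start with a deliberately short certificate lifetime, let it lapse, then start again with a long one; the second start is expected to regenerate the expired certs rather than fail:
    minikube start -p cert-expiration-300900 --memory=3072 --cert-expiration=3m --driver=docker
    # ...wait out the 3m window (the gap between the two starts above)...
    minikube start -p cert-expiration-300900 --memory=3072 --cert-expiration=8760h --driver=docker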

                                                
                                    
TestDockerFlags (54.08s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-093500 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-093500 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (48.7638054s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-093500 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-093500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-093500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-093500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-093500: (4.0750618s)
--- PASS: TestDockerFlags (54.08s)
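The two systemctl probes map onto the two flag families: --docker-env entries should surface in the docker unit's Environment= property, --docker-opt entries as extra arguments on ExecStart=:
    minikube -p docker-flags-093500 ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p docker-flags-093500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"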

                                                
                                    
TestForceSystemdFlag (58.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-688500 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-688500 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: (53.1104564s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-688500 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-688500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-688500
E1216 06:00:04.915879   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-688500: (4.4903568s)
--- PASS: TestForceSystemdFlag (58.31s)
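Both force-systemd variants end with the same assertion: with systemd forced, Docker inside the node should report the systemd cgroup driver:
    minikube -p force-systemd-flag-688500 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd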

                                                
                                    
TestForceSystemdEnv (77.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-570500 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-570500 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m13.2577688s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-570500 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-570500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-570500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-570500: (4.0185099s)
--- PASS: TestForceSystemdEnv (77.94s)

                                                
                                    
TestErrorSpam/start (2.57s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 start --dry-run
--- PASS: TestErrorSpam/start (2.57s)

                                                
                                    
TestErrorSpam/status (2.14s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 status
--- PASS: TestErrorSpam/status (2.14s)

                                                
                                    
TestErrorSpam/pause (2.64s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 pause: (1.189394s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 pause
--- PASS: TestErrorSpam/pause (2.64s)

                                                
                                    
TestErrorSpam/unpause (2.62s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 unpause
--- PASS: TestErrorSpam/unpause (2.62s)

                                                
                                    
TestErrorSpam/stop (18.72s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 stop: (11.9186529s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 stop: (3.0534733s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-836400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-836400 stop: (3.7376328s)
--- PASS: TestErrorSpam/stop (18.72s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (86.56s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-902700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1216 04:37:01.789818   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:01.797107   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:01.808492   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:01.830996   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:01.873008   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:01.955265   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:02.117561   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:02.439794   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:03.081648   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:04.364392   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:06.926510   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:12.049691   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:37:22.291432   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-902700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m26.5558954s)
--- PASS: TestFunctional/serial/StartWithProxy (86.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (50.3s)
=== RUN   TestFunctional/serial/SoftStart
I1216 04:37:42.024201   11704 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-902700 --alsologtostderr -v=8
E1216 04:37:42.774134   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:38:23.736851   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-902700 --alsologtostderr -v=8: (50.296167s)
functional_test.go:678: soft start took 50.2972146s for "functional-902700" cluster.
I1216 04:38:32.321247   11704 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (50.30s)

                                                
                                    
TestFunctional/serial/KubeContext (0.09s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.27s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-902700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (10.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 cache add registry.k8s.io/pause:3.1: (3.9278926s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 cache add registry.k8s.io/pause:3.3: (3.1572166s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 cache add registry.k8s.io/pause:latest: (3.1400205s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.23s)
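`minikube cache add` pulls the image on the host and loads it into the node's container runtime, so it can then be seen from inside the node:
    minikube -p functional-902700 cache add registry.k8s.io/pause:3.1
    minikube cache list
    minikube -p functional-902700 ssh sudo crictl images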

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-902700 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3608295077\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-902700 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3608295077\001: (1.1893925s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cache add minikube-local-cache-test:functional-902700
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 cache add minikube-local-cache-test:functional-902700: (2.5776551s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cache delete minikube-local-cache-test:functional-902700
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-902700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.6s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (4.51s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (569.8509ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 cache reload: (2.7835849s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.51s)
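The reload scenario in full: delete a cached image out from under the node, confirm it is gone, then ask minikube to re-push everything in the cache:
    minikube -p functional-902700 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-902700 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: missing
    minikube -p functional-902700 cache reload
    minikube -p functional-902700 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored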

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.38s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.39s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 kubectl -- --context functional-902700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.39s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.83s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-902700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.83s)

                                                
                                    
TestFunctional/serial/ExtraConfig (49.67s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-902700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-902700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.6682365s)
functional_test.go:776: restart took 49.668859s for "functional-902700" cluster.
I1216 04:39:44.708665   11704 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (49.67s)
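--extra-config takes component.flag=value pairs and forwards the flag to the named control-plane component; the restart here passes an admission-plugin list straight through to the API server:
    minikube start -p functional-902700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all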

                                                
                                    
TestFunctional/serial/ComponentHealth (0.14s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-902700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)
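The health check is just a label query over the control-plane pods followed by a phase/Ready inspection of each entry; by hand:
    kubectl --context functional-902700 get po -l tier=control-plane -n kube-system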

                                                
                                    
TestFunctional/serial/LogsCmd (1.75s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 logs
E1216 04:39:45.659676   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 logs: (1.7484843s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.84s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3764159898\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3764159898\001\logs.txt: (1.8260486s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

                                                
                                    
TestFunctional/serial/InvalidService (5.64s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-902700 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-902700
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-902700: exit status 115 (1.0216029s)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30217 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-902700 delete -f testdata\invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-902700 delete -f testdata\invalidsvc.yaml: (1.3025788s)
--- PASS: TestFunctional/serial/InvalidService (5.64s)
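The negative case: a Service whose selector matches no running pod. `minikube service` is expected to fail fast with SVC_UNREACHABLE (exit status 115 in this run) instead of hanging on a dead endpoint:
    kubectl --context functional-902700 apply -f testdata\invalidsvc.yaml
    minikube service invalid-svc -p functional-902700      # non-zero exit, SVC_UNREACHABLE
    kubectl --context functional-902700 delete -f testdata\invalidsvc.yaml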

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 config get cpus: exit status 14 (197.018ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 config get cpus: exit status 14 (169.9948ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.23s)
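`config get` on a key that was never set (or was unset) exits with status 14, which is what both Non-zero branches above assert; the happy path in between:
    minikube -p functional-902700 config set cpus 2
    minikube -p functional-902700 config get cpus      # prints 2
    minikube -p functional-902700 config unset cpus
    minikube -p functional-902700 config get cpus      # exit status 14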

                                                
                                    
TestFunctional/parallel/DryRun (1.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-902700 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-902700 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (650.9951ms)
-- stdout --
	* [functional-902700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1216 04:40:13.390891    8560 out.go:360] Setting OutFile to fd 1836 ...
	I1216 04:40:13.436874    8560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:13.436874    8560 out.go:374] Setting ErrFile to fd 1388...
	I1216 04:40:13.436874    8560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:13.449878    8560 out.go:368] Setting JSON to false
	I1216 04:40:13.452885    8560 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1035,"bootTime":1765858978,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:40:13.452885    8560 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:40:13.457890    8560 out.go:179] * [functional-902700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:40:13.460895    8560 notify.go:221] Checking for updates...
	I1216 04:40:13.462878    8560 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:40:13.464876    8560 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:40:13.466878    8560 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:40:13.468879    8560 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:40:13.470885    8560 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:40:13.473886    8560 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 04:40:13.474876    8560 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:40:13.609887    8560 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:40:13.613879    8560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:40:13.879881    8560 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:84 SystemTime:2025-12-16 04:40:13.85930675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:40:13.881876    8560 out.go:179] * Using the docker driver based on existing profile
	I1216 04:40:13.885874    8560 start.go:309] selected driver: docker
	I1216 04:40:13.885874    8560 start.go:927] validating driver "docker" against &{Name:functional-902700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-902700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:40:13.885874    8560 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:40:13.928875    8560 out.go:203] 
	W1216 04:40:13.930874    8560 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 04:40:13.932875    8560 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-902700 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.55s)
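Note: `--dry-run` runs the full validation path without creating or mutating the cluster, which is why the 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23; the usable minimum is 1800MB) while the second, flag-free invocation passes. A sketch, assuming a hypothetical profile named demo:

    minikube start -p demo --dry-run --memory 250MB --driver=docker   # exits 23: RSRC_INSUFFICIENT_REQ_MEMORY
    minikube start -p demo --dry-run --driver=docker                  # validates only; nothing is started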

TestFunctional/parallel/InternationalLanguage (0.76s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-902700 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-902700 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (759.4542ms)

-- stdout --
	* [functional-902700] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1216 04:40:12.641421    7884 out.go:360] Setting OutFile to fd 1508 ...
	I1216 04:40:12.687431    7884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:12.687431    7884 out.go:374] Setting ErrFile to fd 2004...
	I1216 04:40:12.687431    7884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:12.702425    7884 out.go:368] Setting JSON to false
	I1216 04:40:12.705423    7884 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1034,"bootTime":1765858978,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 04:40:12.705423    7884 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 04:40:12.724695    7884 out.go:179] * [functional-902700] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 04:40:12.729957    7884 notify.go:221] Checking for updates...
	I1216 04:40:12.733516    7884 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 04:40:12.738106    7884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:40:12.742877    7884 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 04:40:12.746910    7884 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:40:12.749474    7884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:40:12.753044    7884 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 04:40:12.754842    7884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:40:12.890875    7884 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 04:40:12.894875    7884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 04:40:13.164872    7884 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 04:40:13.14789128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 04:40:13.182882    7884 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 04:40:13.184872    7884 start.go:309] selected driver: docker
	I1216 04:40:13.184872    7884 start.go:927] validating driver "docker" against &{Name:functional-902700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-902700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:40:13.184872    7884 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:40:13.274874    7884 out.go:203] 
	W1216 04:40:13.276885    7884 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 04:40:13.278876    7884 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.76s)
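Note: the French output comes from the same dry-run path; minikube selects its message catalogue from the host locale, so the test only flips the locale before invoking the binary. A sketch, under the assumption that the LANG environment variable is what drives the translation lookup on this platform:

    $env:LANG = "fr"                                                  # PowerShell; assumption: locale detection may differ per OS
    minikube start -p demo --dry-run --memory 250MB --driver=docker   # same exit 23, message in French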

TestFunctional/parallel/StatusCmd (2.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.06s)
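Note: `status -f` takes a Go template over the status struct, and `-o json` emits the same fields machine-readably. A minimal sketch against a hypothetical profile:

    minikube -p demo status -f "host:{{.Host}},apiserver:{{.APIServer}}"
    minikube -p demo status -o json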

TestFunctional/parallel/AddonsCmd (0.43s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.43s)

TestFunctional/parallel/PersistentVolumeClaim (23.42s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [eb06ec1d-f7ea-4de3-bd10-1c2f75eecad4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004885s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-902700 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-902700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-902700 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-902700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [42e6b0e6-361a-4520-95bb-d79fea2946c6] Pending
helpers_test.go:353: "sp-pod" [42e6b0e6-361a-4520-95bb-d79fea2946c6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [42e6b0e6-361a-4520-95bb-d79fea2946c6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0057174s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-902700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-902700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-902700 delete -f testdata/storage-provisioner/pod.yaml: (1.7370745s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-902700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [35611882-8b9e-43d8-b11c-cd8f876ee37d] Pending
helpers_test.go:353: "sp-pod" [35611882-8b9e-43d8-b11c-cd8f876ee37d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [35611882-8b9e-43d8-b11c-cd8f876ee37d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0065659s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-902700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.42s)
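Note: the sequence above is a persistence check: the claim binds, the first sp-pod writes /tmp/mount/foo, the pod is deleted and recreated from the same manifest, and the file is still visible on the remounted volume. A condensed sketch of the same check (the manifest paths are the test's own testdata):

    kubectl get pvc myclaim -o jsonpath='{.status.phase}'   # expect Bound
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f testdata/storage-provisioner/pod.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml
    kubectl exec sp-pod -- ls /tmp/mount                    # foo survives the pod's recreation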

TestFunctional/parallel/SSHCmd (1.19s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.19s)

TestFunctional/parallel/CpCmd (3.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh -n functional-902700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cp functional-902700:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd1044941679\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh -n functional-902700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh -n functional-902700 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.39s)
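Note: `minikube cp` copies in both directions (host-to-node and node-to-host), and the /tmp/does/not/exist step exercises creation of missing target directories. A sketch with hypothetical local paths:

    minikube -p demo cp .\cp-test.txt /home/docker/cp-test.txt
    minikube -p demo cp demo:/home/docker/cp-test.txt C:\tmp\cp-test.txt
    minikube -p demo ssh "sudo cat /home/docker/cp-test.txt"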

TestFunctional/parallel/MySQL (93.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-902700 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-lpsks" [3a6f209f-e289-4b89-af64-99f4069bf651] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-lpsks" [3a6f209f-e289-4b89-af64-99f4069bf651] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m12.0071461s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;": exit status 1 (319.1581ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 04:41:31.888921   11704 retry.go:31] will retry after 1.365978808s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;": exit status 1 (537.2357ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 04:41:33.796722   11704 retry.go:31] will retry after 1.988657103s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;": exit status 1 (195.5488ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 04:41:35.984882   11704 retry.go:31] will retry after 2.146386387s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;": exit status 1 (201.5441ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 04:41:38.338418   11704 retry.go:31] will retry after 2.751191666s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;": exit status 1 (224.332ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1216 04:41:41.318390   11704 retry.go:31] will retry after 6.641036701s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;": exit status 1 (198.3733ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 04:41:48.164370   11704 retry.go:31] will retry after 3.893446669s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-902700 exec mysql-6bcdcbc558-lpsks -- mysql -ppassword -e "show databases;"
E1216 04:42:01.791343   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:29.503285   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (93.06s)
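Note: the retries above are the normal startup pattern for this test. ERROR 2002 means mysqld's socket is not accepting connections yet, and the transient ERROR 1045 appears while the server is still applying its initial credentials; the harness simply polls until the query succeeds. A hedged PowerShell sketch of the same polling loop (deploy/mysql stands in for the generated pod name, demo for the context):

    do {
        kubectl --context demo exec deploy/mysql -- mysql -ppassword -e "show databases;"
        if ($LASTEXITCODE -ne 0) { Start-Sleep -Seconds 5 }   # back off while mysqld initializes
    } while ($LASTEXITCODE -ne 0)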

TestFunctional/parallel/FileSync (0.54s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11704/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo cat /etc/test/nested/copy/11704/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.54s)

TestFunctional/parallel/CertSync (3.23s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo cat /etc/ssl/certs/11704.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo cat /usr/share/ca-certificates/11704.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/117042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo cat /etc/ssl/certs/117042.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/117042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo cat /usr/share/ca-certificates/117042.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.23s)
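Note: the paired file names are the point of this test: each uploaded certificate is synced both under its own name and under an OpenSSL subject-hash name (the .0 files), so software that looks certificates up by hash can find them. A sketch of how such a hash name can be checked, assuming the .pem is already on the node:

    minikube -p demo ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/11704.pem"   # should print 51391683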

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-902700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)
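Note: the go-template above iterates the label map of the first node; a jsonpath query over the same field is an equivalent, often easier-to-quote alternative:

    kubectl --context demo get nodes -o jsonpath='{.items[0].metadata.labels}'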

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 ssh "sudo systemctl is-active crio": exit status 1 (570.1362ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
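Note: the non-zero exit is the assertion, not a failure: `systemctl is-active` prints the unit state and exits 3 when the unit is inactive, so crio being down under the docker runtime is exactly what the test wants to see. A sketch:

    minikube -p demo ssh "sudo systemctl is-active docker"   # active, exit 0
    minikube -p demo ssh "sudo systemctl is-active crio"     # inactive, exit 3 (surfaces as a non-zero ssh exit)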

TestFunctional/parallel/License (1.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.3936939s)
--- PASS: TestFunctional/parallel/License (1.41s)

TestFunctional/parallel/DockerEnv/powershell (10.58s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-902700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-902700"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-902700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-902700": (8.3308612s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-902700 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-902700 docker-env | Invoke-Expression ; docker images": (2.2395997s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (10.58s)
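Note: `docker-env` prints the environment-variable assignments that point a local docker CLI at the daemon inside the minikube node; piping them through Invoke-Expression applies them to the current PowerShell session. A sketch:

    minikube -p demo docker-env | Invoke-Expression
    docker images   # now lists the images inside the minikube node, not the host daemon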

TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-902700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-902700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-902700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-902700 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 14248: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 6168: OpenProcess: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-902700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-902700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [811c12a0-c2a7-4d5e-b532-5162c601b44e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [811c12a0-c2a7-4d5e-b532-5162c601b44e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0069429s
I1216 04:40:09.488788   11704 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.38s)
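Note: `minikube tunnel` is what lets a LoadBalancer service like nginx-svc leave Pending: it runs as a foreground daemon and wires a route so the service receives an ingress IP, which the IngressIP step below reads back. A sketch (tunnel in its own terminal):

    minikube -p demo tunnel
    kubectl --context demo get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'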

TestFunctional/parallel/ServiceCmd/DeployApp (8.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-902700 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-902700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-ncz6k" [efebfeeb-3227-4f1d-8b80-aa820137b3d6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-ncz6k" [efebfeeb-3227-4f1d-8b80-aa820137b3d6] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.0543467s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-902700 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-902700 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 256: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 12948: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.98s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.98s)

TestFunctional/parallel/ProfileCmd/profile_list (0.86s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "684.8585ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "171.7383ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.86s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.91s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "739.3541ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "173.2782ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.91s)

TestFunctional/parallel/ServiceCmd/List (0.85s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.85s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.86s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 service list -o json
functional_test.go:1504: Took "856.5788ms" to run "out/minikube-windows-amd64.exe -p functional-902700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.86s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 service --namespace=default --https --url hello-node: exit status 1 (15.0094926s)

-- stdout --
	https://127.0.0.1:49174

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:49174
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)
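Note: the exit status 1 here is an artifact of how the docker driver works on Windows: `minikube service --url` has to keep a port-forwarding tunnel alive, so it prints the localhost URL and then blocks until the terminal closes, and the test kills it once the endpoint has been read. A sketch:

    minikube -p demo service hello-node --namespace=default --https --url   # prints https://127.0.0.1:<port>; keep the window open while using it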

TestFunctional/parallel/Version/short (0.19s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.19s)

TestFunctional/parallel/Version/components (0.84s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.84s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-902700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-902700
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-902700
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-902700 image ls --format short --alsologtostderr:
I1216 04:40:37.230770    2532 out.go:360] Setting OutFile to fd 1688 ...
I1216 04:40:37.278026    2532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:37.278026    2532 out.go:374] Setting ErrFile to fd 1588...
I1216 04:40:37.278026    2532 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:37.297519    2532 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:37.297639    2532 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:37.304880    2532 cli_runner.go:164] Run: docker container inspect functional-902700 --format={{.State.Status}}
I1216 04:40:37.364177    2532 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:37.367375    2532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-902700
I1216 04:40:37.421145    2532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65283 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-902700\id_rsa Username:docker}
I1216 04:40:37.546457    2532 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.44s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-902700 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ docker.io/kicbase/echo-server               │ functional-902700 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ docker.io/library/minikube-local-cache-test │ functional-902700 │ 36df0e5473a06 │ 30B    │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ localhost/my-image                          │ functional-902700 │ 705c114e37535 │ 1.24MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-902700 image ls --format table --alsologtostderr:
I1216 04:40:50.632755    2840 out.go:360] Setting OutFile to fd 1576 ...
I1216 04:40:50.677757    2840 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:50.677757    2840 out.go:374] Setting ErrFile to fd 1544...
I1216 04:40:50.677757    2840 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:50.689769    2840 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:50.689769    2840 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:50.696762    2840 cli_runner.go:164] Run: docker container inspect functional-902700 --format={{.State.Status}}
I1216 04:40:50.758555    2840 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:50.762367    2840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-902700
I1216 04:40:50.819210    2840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65283 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-902700\id_rsa Username:docker}
I1216 04:40:50.944140    2840 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-902700 image ls --format json --alsologtostderr:
[{"id":"705c114e3753537a7f5ad8f3a7ce707724c243d83e000da09b5c2286db128721","repoDigests":[],"repoTags":["localhost/my-image:functional-902700"],"size":"1240000"},{"id":"36df0e5473a06af0424b0257e31c0f49bdddee0d18d38ed573e389bd28e9dd23","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-902700"],"size":"30"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-902700","docker.io/kicbase/echo-server:latest"],"siz
e":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819c
ae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-902700 image ls --format json --alsologtostderr:
I1216 04:40:51.087253    7224 out.go:360] Setting OutFile to fd 1920 ...
I1216 04:40:51.130344    7224 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:51.130344    7224 out.go:374] Setting ErrFile to fd 836...
I1216 04:40:51.130459    7224 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:51.143964    7224 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:51.144625    7224 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:51.150954    7224 cli_runner.go:164] Run: docker container inspect functional-902700 --format={{.State.Status}}
I1216 04:40:51.209620    7224 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:51.213620    7224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-902700
I1216 04:40:51.270769    7224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65283 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-902700\id_rsa Username:docker}
I1216 04:40:51.414009    7224 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.47s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-902700 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 705c114e3753537a7f5ad8f3a7ce707724c243d83e000da09b5c2286db128721
repoDigests: []
repoTags:
- localhost/my-image:functional-902700
size: "1240000"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-902700
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 36df0e5473a06af0424b0257e31c0f49bdddee0d18d38ed573e389bd28e9dd23
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-902700
size: "30"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-902700 image ls --format yaml --alsologtostderr:
I1216 04:40:50.196364   11032 out.go:360] Setting OutFile to fd 688 ...
I1216 04:40:50.240862   11032 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:50.240862   11032 out.go:374] Setting ErrFile to fd 1824...
I1216 04:40:50.240862   11032 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:50.252851   11032 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:50.252987   11032 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:50.259842   11032 cli_runner.go:164] Run: docker container inspect functional-902700 --format={{.State.Status}}
I1216 04:40:50.316073   11032 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:50.319080   11032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-902700
I1216 04:40:50.370070   11032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65283 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-902700\id_rsa Username:docker}
I1216 04:40:50.501379   11032 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)
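Note: the four ImageList variants above run the same command with different --format values; each shells into the node over SSH and wraps "docker images --no-trunc --format {{json .}}" (visible in the Stderr traces). To reproduce any of them against this profile:

  out/minikube-windows-amd64.exe -p functional-902700 image ls --format short
  out/minikube-windows-amd64.exe -p functional-902700 image ls --format table
  out/minikube-windows-amd64.exe -p functional-902700 image ls --format json
  out/minikube-windows-amd64.exe -p functional-902700 image ls --format yaml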

TestFunctional/parallel/ImageCommands/ImageBuild (12.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 ssh pgrep buildkitd: exit status 1 (539.0043ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr: (11.5306255s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-902700 image build -t localhost/my-image:functional-902700 testdata\build --alsologtostderr:
I1216 04:40:38.214784   11660 out.go:360] Setting OutFile to fd 1732 ...
I1216 04:40:38.275232   11660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:38.275232   11660 out.go:374] Setting ErrFile to fd 1948...
I1216 04:40:38.275232   11660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:38.289175   11660 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:38.312600   11660 config.go:182] Loaded profile config "functional-902700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1216 04:40:38.323644   11660 cli_runner.go:164] Run: docker container inspect functional-902700 --format={{.State.Status}}
I1216 04:40:38.380599   11660 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:38.383597   11660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-902700
I1216 04:40:38.432595   11660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65283 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-902700\id_rsa Username:docker}
I1216 04:40:38.550149   11660 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.846911659.tar
I1216 04:40:38.554764   11660 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 04:40:38.575121   11660 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.846911659.tar
I1216 04:40:38.583173   11660 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.846911659.tar: stat -c "%s %y" /var/lib/minikube/build/build.846911659.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.846911659.tar': No such file or directory
I1216 04:40:38.583173   11660 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.846911659.tar --> /var/lib/minikube/build/build.846911659.tar (3072 bytes)
I1216 04:40:38.619257   11660 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.846911659
I1216 04:40:38.637999   11660 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.846911659 -xf /var/lib/minikube/build/build.846911659.tar
I1216 04:40:38.651294   11660 docker.go:361] Building image: /var/lib/minikube/build/build.846911659
I1216 04:40:38.654663   11660 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-902700 /var/lib/minikube/build/build.846911659
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 4.1s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 4.4s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 4.4s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 5.2s

#6 [2/3] RUN true
#6 DONE 2.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.9s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:705c114e3753537a7f5ad8f3a7ce707724c243d83e000da09b5c2286db128721
#8 writing image sha256:705c114e3753537a7f5ad8f3a7ce707724c243d83e000da09b5c2286db128721 done
#8 naming to localhost/my-image:functional-902700 0.0s done
#8 DONE 0.3s
I1216 04:40:49.603704   11660 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-902700 /var/lib/minikube/build/build.846911659: (10.9490007s)
I1216 04:40:49.612385   11660 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.846911659
I1216 04:40:49.630985   11660 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.846911659.tar
I1216 04:40:49.645324   11660 build_images.go:218] Built localhost/my-image:functional-902700 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.846911659.tar
I1216 04:40:49.645324   11660 build_images.go:134] succeeded building to: functional-902700
I1216 04:40:49.645324   11660 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (12.51s)
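Note: the BuildKit steps #1-#8 above imply a three-instruction build context; a sketch of a Dockerfile consistent with that log (the actual testdata\build file may differ slightly) is:

  # ~97B Dockerfile, matching steps #4 (FROM), #6 (RUN true), and #7 (ADD content.txt /)
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /

The test drives it through "out/minikube-windows-amd64.exe -p functional-902700 image build -t localhost/my-image:functional-902700 testdata\build", which, per the Stderr trace, tars the context, copies it to /var/lib/minikube/build on the node, and runs docker build there.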

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7109454s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-902700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image load --daemon kicbase/echo-server:functional-902700 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 image load --daemon kicbase/echo-server:functional-902700 --alsologtostderr: (2.7896878s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.25s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image load --daemon kicbase/echo-server:functional-902700 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 image load --daemon kicbase/echo-server:functional-902700 --alsologtostderr: (2.3795997s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-902700
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image load --daemon kicbase/echo-server:functional-902700 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-902700 image load --daemon kicbase/echo-server:functional-902700 --alsologtostderr: (2.3536225s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image save kicbase/echo-server:functional-902700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image rm kicbase/echo-server:functional-902700 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.88s)

TestFunctional/parallel/ServiceCmd/Format (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 service hello-node --url --format={{.IP}}: exit status 1 (15.0409307s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-902700
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 image save --daemon kicbase/echo-server:functional-902700 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-902700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.87s)
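Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a full image round trip between the cluster node and the host Docker daemon. Condensed from the commands above:

  out/minikube-windows-amd64.exe -p functional-902700 image save kicbase/echo-server:functional-902700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar
  out/minikube-windows-amd64.exe -p functional-902700 image rm kicbase/echo-server:functional-902700
  out/minikube-windows-amd64.exe -p functional-902700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar
  out/minikube-windows-amd64.exe -p functional-902700 image save --daemon kicbase/echo-server:functional-902700
  docker image inspect kicbase/echo-server:functional-902700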

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-902700 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-902700 service hello-node --url: exit status 1 (15.0104682s)

-- stdout --
	http://127.0.0.1:49260

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:49260
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/delete_echo-server_images (0.14s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-902700
--- PASS: TestFunctional/delete_echo-server_images (0.14s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-902700
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-902700
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11704\hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (9.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 cache add registry.k8s.io/pause:3.1: (3.4978401s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 cache add registry.k8s.io/pause:3.3: (3.0427067s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 cache add registry.k8s.io/pause:latest: (3.1352448s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (9.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-002200 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1016712311\001
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cache add minikube-local-cache-test:functional-002200
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 cache add minikube-local-cache-test:functional-002200: (2.5298549s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cache delete minikube-local-cache-test:functional-002200
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-002200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (559.954ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 cache reload: (2.7802686s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.47s)
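Note: cache_reload verifies that an image deleted from the node's container runtime is restored from minikube's on-disk cache. The sequence from the log, with the outcome of each step:

  out/minikube-windows-amd64.exe -p functional-002200 ssh sudo docker rmi registry.k8s.io/pause:latest
  out/minikube-windows-amd64.exe -p functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # fails: image no longer present
  out/minikube-windows-amd64.exe -p functional-002200 cache reload
  out/minikube-windows-amd64.exe -p functional-002200 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds after the reload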

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs: (1.2997308s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3133545609\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3133545609\001\logs.txt: (1.3161371s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 config get cpus: exit status 14 (165.0275ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 config get cpus: exit status 14 (151.5863ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.18s)
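Note: ConfigCmd asserts that "config get" on an unset key exits with status 14 ("specified key could not be found in config") and that set/get/unset round-trip cleanly:

  out/minikube-windows-amd64.exe -p functional-002200 config unset cpus
  out/minikube-windows-amd64.exe -p functional-002200 config get cpus    # exit status 14 while unset
  out/minikube-windows-amd64.exe -p functional-002200 config set cpus 2
  out/minikube-windows-amd64.exe -p functional-002200 config get cpus    # now succeeds
  out/minikube-windows-amd64.exe -p functional-002200 config unset cpus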

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (632.072ms)

-- stdout --
	* [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1216 05:17:00.847892    6088 out.go:360] Setting OutFile to fd 1968 ...
	I1216 05:17:00.890298    6088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:00.890298    6088 out.go:374] Setting ErrFile to fd 1036...
	I1216 05:17:00.890298    6088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:00.904121    6088 out.go:368] Setting JSON to false
	I1216 05:17:00.906458    6088 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3242,"bootTime":1765858978,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:17:00.906458    6088 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:17:00.909272    6088 out.go:179] * [functional-002200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:17:00.911823    6088 notify.go:221] Checking for updates...
	I1216 05:17:00.911823    6088 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:17:00.913894    6088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:17:00.916021    6088 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:17:00.917801    6088 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:17:00.920660    6088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:17:00.924232    6088 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:17:00.925014    6088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:17:01.048604    6088 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:17:01.052167    6088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:17:01.292802    6088 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 05:17:01.274354273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:17:01.310740    6088 out.go:179] * Using the docker driver based on existing profile
	I1216 05:17:01.313368    6088 start.go:309] selected driver: docker
	I1216 05:17:01.313405    6088 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:17:01.313521    6088 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:17:01.362235    6088 out.go:203] 
	W1216 05:17:01.365241    6088 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 05:17:01.367332    6088 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-002200 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0
E1216 05:17:01.804830   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.49s)
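The dry-run exercise above intentionally trips minikube's memory validation first: any --memory request below the usable minimum of 1800MB fails fast with RSRC_INSUFFICIENT_REQ_MEMORY before anything is created (exit status 23, as the InternationalLanguage run below shows). A minimal sketch of the two invocations this test pairs, using the profile from this run; the exact test flags may differ:

    REM fails validation: 250MB is below the 1800MB usable minimum
    out/minikube-windows-amd64.exe start -p functional-002200 --dry-run --memory 250MB --driver=docker --kubernetes-version=v1.35.0-beta.0
    REM passes validation against the existing profile (Memory:4096)
    out/minikube-windows-amd64.exe start -p functional-002200 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0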

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-002200 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (611.9903ms)
-- stdout --
	* [functional-002200] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1216 05:17:06.453329    5816 out.go:360] Setting OutFile to fd 1532 ...
	I1216 05:17:06.497636    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:06.497636    5816 out.go:374] Setting ErrFile to fd 476...
	I1216 05:17:06.497636    5816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:17:06.512261    5816 out.go:368] Setting JSON to false
	I1216 05:17:06.515710    5816 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3248,"bootTime":1765858978,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1216 05:17:06.515840    5816 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1216 05:17:06.519311    5816 out.go:179] * [functional-002200] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1216 05:17:06.523675    5816 notify.go:221] Checking for updates...
	I1216 05:17:06.523724    5816 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1216 05:17:06.526347    5816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:17:06.529287    5816 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1216 05:17:06.531703    5816 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:17:06.533890    5816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:17:06.536576    5816 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1216 05:17:06.537778    5816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:17:06.656791    5816 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1216 05:17:06.660998    5816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 05:17:06.894343    5816 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-16 05:17:06.877354472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1216 05:17:06.902669    5816 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1216 05:17:06.905232    5816 start.go:309] selected driver: docker
	I1216 05:17:06.905267    5816 start.go:927] validating driver "docker" against &{Name:functional-002200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-002200 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:17:06.905384    5816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:17:06.943599    5816 out.go:203] 
	W1216 05:17:06.945840    5816 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 05:17:06.948775    5816 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.61s)
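For readers without French: the localized lines above read "Using the docker driver based on the existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB", i.e. the same failure as the English DryRun output, which is exactly what this test asserts. A hypothetical way to reproduce the localized output by hand on a Unix shell, assuming minikube picks its display language from LC_ALL/LANG (the Windows harness drives this through the system locale instead):

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-002200 --dry-run --memory 250MB --driver=docker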

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh -n functional-002200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cp functional-002200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1554950727\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh -n functional-002200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh -n functional-002200 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.32s)
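The three cp round trips above cover host-to-node, node-to-host, and copying into a node directory that does not yet exist (minikube creates it on the fly); condensed from the commands above, with the long temp destination path shortened:

    out/minikube-windows-amd64.exe -p functional-002200 cp testdata\cp-test.txt /home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p functional-002200 cp functional-002200:/home/docker/cp-test.txt .\cp-test.txt
    out/minikube-windows-amd64.exe -p functional-002200 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt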

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11704/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo cat /etc/test/nested/copy/11704/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo cat /etc/ssl/certs/11704.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11704.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo cat /usr/share/ca-certificates/11704.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/117042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo cat /etc/ssl/certs/117042.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/117042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo cat /usr/share/ca-certificates/117042.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.44s)
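The six checks above verify each synced certificate in two locations plus a hash-named alias: 11704.pem (and the second cert, 117042.pem) under both /etc/ssl/certs and /usr/share/ca-certificates, and the aliases 51391683.0 / 3ec20f2e.0, which follow OpenSSL's c_rehash naming convention of <subject-hash>.0. Assuming that convention holds here, the hash part of the alias can be recomputed from the certificate itself:

    REM prints the subject hash, e.g. 51391683 for the matching .pem
    openssl x509 -noout -subject_hash -in 11704.pem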

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo systemctl is-active crio": exit status 1 (551.272ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.55s)
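The non-zero exit is the expected outcome here: with docker as the active runtime, crio must be inactive, and `systemctl is-active` exits with status 3 (the LSB "program is not running" code) for inactive units. The ssh layer surfaces that as "Process exited with status 3" and the minikube CLI as exit status 1, while stdout still carries the state the test asserts on. The probe, as run by the test:

    out/minikube-windows-amd64.exe -p functional-002200 ssh "sudo systemctl is-active crio"
    REM prints "inactive" and exits non-zero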

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (2.7049026s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.73s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-002200 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-002200
docker.io/kicbase/echo-server:functional-002200
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-002200 image ls --format short --alsologtostderr:
I1216 05:17:09.547510    9252 out.go:360] Setting OutFile to fd 1044 ...
I1216 05:17:09.593769    9252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:09.594314    9252 out.go:374] Setting ErrFile to fd 1196...
I1216 05:17:09.594314    9252 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:09.604610    9252 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:09.605777    9252 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:09.612110    9252 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
I1216 05:17:09.672609    9252 ssh_runner.go:195] Run: systemctl --version
I1216 05:17:09.675772    9252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
I1216 05:17:09.728831    9252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
I1216 05:17:09.856971    9252 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-002200 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-002200 │ ac80a1beb65e4 │ 1.24MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ docker.io/kicbase/echo-server               │ functional-002200 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-002200 │ 36df0e5473a06 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-002200 image ls --format table --alsologtostderr:
I1216 05:17:15.776221    1764 out.go:360] Setting OutFile to fd 1756 ...
I1216 05:17:15.829514    1764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:15.829514    1764 out.go:374] Setting ErrFile to fd 1040...
I1216 05:17:15.829514    1764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:15.846379    1764 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:15.846701    1764 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:15.852968    1764 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
I1216 05:17:15.909599    1764 ssh_runner.go:195] Run: systemctl --version
I1216 05:17:15.912434    1764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
I1216 05:17:15.968441    1764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
I1216 05:17:16.097430    1764 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-002200 image ls --format json --alsologtostderr:
[{"id":"ac80a1beb65e4769efafbd53009fb709ca9f0057f40a5e4662ec9cb2f0861ca3","repoDigests":[],"repoTags":["localhost/my-image:functional-002200"],"size":"1240000"},{"id":"36df0e5473a06af0424b0257e31c0f49bdddee0d18d38ed573e389bd28e9dd23","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-002200"],"size":"30"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-002200"],"size":"4940000"},{"id
":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e730
5ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-002200 image ls --format json --alsologtostderr:
I1216 05:17:15.311779    1328 out.go:360] Setting OutFile to fd 1216 ...
I1216 05:17:15.356253    1328 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:15.356253    1328 out.go:374] Setting ErrFile to fd 1196...
I1216 05:17:15.356253    1328 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:15.367724    1328 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:15.367724    1328 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:15.374911    1328 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
I1216 05:17:15.435698    1328 ssh_runner.go:195] Run: systemctl --version
I1216 05:17:15.438468    1328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
I1216 05:17:15.492624    1328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
I1216 05:17:15.627074    1328 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-002200 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 36df0e5473a06af0424b0257e31c0f49bdddee0d18d38ed573e389bd28e9dd23
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-002200
size: "30"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-002200
size: "4940000"
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-002200 image ls --format yaml --alsologtostderr:
I1216 05:17:10.001856    5084 out.go:360] Setting OutFile to fd 1440 ...
I1216 05:17:10.048121    5084 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:10.048121    5084 out.go:374] Setting ErrFile to fd 1928...
I1216 05:17:10.048121    5084 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:10.059563    5084 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:10.059683    5084 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:10.066998    5084 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
I1216 05:17:10.128651    5084 ssh_runner.go:195] Run: systemctl --version
I1216 05:17:10.133602    5084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
I1216 05:17:10.190493    5084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
I1216 05:17:10.315090    5084 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-002200 ssh pgrep buildkitd: exit status 1 (494.4999ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image build -t localhost/my-image:functional-002200 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 image build -t localhost/my-image:functional-002200 testdata\build --alsologtostderr: (3.9186875s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-002200 image build -t localhost/my-image:functional-002200 testdata\build --alsologtostderr:
I1216 05:17:10.951132    5548 out.go:360] Setting OutFile to fd 1208 ...
I1216 05:17:10.994344    5548 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:10.994344    5548 out.go:374] Setting ErrFile to fd 1124...
I1216 05:17:10.994344    5548 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 05:17:11.012483    5548 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:11.015442    5548 config.go:182] Loaded profile config "functional-002200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1216 05:17:11.022440    5548 cli_runner.go:164] Run: docker container inspect functional-002200 --format={{.State.Status}}
I1216 05:17:11.082805    5548 ssh_runner.go:195] Run: systemctl --version
I1216 05:17:11.089069    5548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002200
I1216 05:17:11.145673    5548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49317 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-002200\id_rsa Username:docker}
I1216 05:17:11.281650    5548 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.2903491756.tar
I1216 05:17:11.286595    5548 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 05:17:11.303558    5548 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2903491756.tar
I1216 05:17:11.311268    5548 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2903491756.tar: stat -c "%s %y" /var/lib/minikube/build/build.2903491756.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2903491756.tar': No such file or directory
I1216 05:17:11.311268    5548 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.2903491756.tar --> /var/lib/minikube/build/build.2903491756.tar (3072 bytes)
I1216 05:17:11.341165    5548 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2903491756
I1216 05:17:11.357071    5548 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2903491756 -xf /var/lib/minikube/build/build.2903491756.tar
I1216 05:17:11.368662    5548 docker.go:361] Building image: /var/lib/minikube/build/build.2903491756
I1216 05:17:11.372563    5548 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-002200 /var/lib/minikube/build/build.2903491756
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:ac80a1beb65e4769efafbd53009fb709ca9f0057f40a5e4662ec9cb2f0861ca3 done
#8 naming to localhost/my-image:functional-002200 0.0s done
#8 DONE 0.2s
I1216 05:17:14.724108    5548 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-002200 /var/lib/minikube/build/build.2903491756: (3.3515141s)
I1216 05:17:14.728890    5548 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2903491756
I1216 05:17:14.746707    5548 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2903491756.tar
I1216 05:17:14.760443    5548 build_images.go:218] Built localhost/my-image:functional-002200 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.2903491756.tar
I1216 05:17:14.760443    5548 build_images.go:134] succeeded building to: functional-002200
I1216 05:17:14.760443    5548 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.86s)
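From the BuildKit steps above (a 97B build definition with stages [1/3] through [3/3]), testdata\build\Dockerfile is presumably equivalent to this three-line sketch; the actual file may pin the tag or digest differently:

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /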

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-002200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr: (3.0498977s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr: (2.4865316s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-002200
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-002200 image load --daemon kicbase/echo-server:functional-002200 --alsologtostderr: (2.3729134s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.62s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image save kicbase/echo-server:functional-002200 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.64s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image rm kicbase/echo-server:functional-002200 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-002200
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-002200 image save --daemon kicbase/echo-server:functional-002200 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-002200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.82s)
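
Note: ImageSaveToFile through ImageSaveDaemon above exercise the full image round trip: save a cluster image to a tarball, remove it, reload it from the tarball, then push it back into the host daemon with image save --daemon. Condensed (paths and profile name illustrative):

    minikube -p demo image save kicbase/echo-server:demo C:\tmp\echo-server.tar
    minikube -p demo image rm kicbase/echo-server:demo
    minikube -p demo image load C:\tmp\echo-server.tar
    minikube -p demo image save --daemon kicbase/echo-server:demo
    docker image inspect kicbase/echo-server:demo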

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-002200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)
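
Note: StartTunnel only launches the tunnel as a background daemon; DeleteTunnel then tears it down, and here the helper found the parent process already gone ("assuming dead"), which the test tolerates. Run by hand, the tunnel is a long-lived foreground process (profile name illustrative):

    minikube -p demo tunnel --alsologtostderr
    rem in a second shell: LoadBalancer services now receive a reachable external IP
    kubectl get svc -w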

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "637.2741ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "160.7629ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "639.5198ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "153.9778ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.79s)
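
Note: profile list supports plain, JSON, and "light" output; --light (and -l) skips probing each cluster's live status, which is why the light variants above return in roughly 150-160 ms versus roughly 640 ms for the full listing:

    minikube profile list
    minikube profile list -o json
    minikube profile list -o json --light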

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-002200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-002200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-002200
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.06s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (217.8s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1216 05:21:18.282470   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:18.289232   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:18.300509   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:18.322321   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:18.363484   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:18.445242   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:18.606740   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:18.929054   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:19.571191   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:20.852985   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:23.414563   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:28.537502   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:38.779982   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:21:59.263222   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:22:01.807808   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:22:40.225838   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:22:58.754041   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:24:02.148461   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m36.2193182s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: (1.5785522s)
--- PASS: TestMultiControlPlane/serial/StartCluster (217.80s)
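
Note: --ha provisions a highly available cluster with three control-plane nodes (ha-628200, -m02, -m03 here), and --wait true blocks until every component reports healthy, which dominates the 217 s runtime. Equivalent invocation (profile name illustrative):

    minikube start -p demo-ha --ha --memory 3072 --wait true --driver=docker
    minikube -p demo-ha status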

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.54s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 kubectl -- rollout status deployment/busybox: (4.3684016s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-b4rmt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-tss8z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-v9hcs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-b4rmt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-tss8z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-v9hcs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-b4rmt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-tss8z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-v9hcs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.54s)
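
Note: the DNS checks above run from every busybox replica, so they verify CoreDNS resolution on each node: kubernetes.io for external lookups, and kubernetes.default plus its full service name for cluster DNS. The per-pod check pattern (pod name is a placeholder):

    kubectl --context demo-ha exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local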

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (2.62s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-b4rmt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-b4rmt -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-tss8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-tss8z -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-v9hcs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 kubectl -- exec busybox-7b57f96db7-v9hcs -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.62s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.15s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 node add --alsologtostderr -v 5
E1216 05:24:55.683310   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 node add --alsologtostderr -v 5: (53.2159064s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: (1.9319336s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.15s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-628200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9505879s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.95s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (33.23s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 status --output json --alsologtostderr -v 5: (1.8944527s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp testdata\cp-test.txt ha-628200:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile913262778\001\cp-test_ha-628200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200:/home/docker/cp-test.txt ha-628200-m02:/home/docker/cp-test_ha-628200_ha-628200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test_ha-628200_ha-628200-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200:/home/docker/cp-test.txt ha-628200-m03:/home/docker/cp-test_ha-628200_ha-628200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test_ha-628200_ha-628200-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200:/home/docker/cp-test.txt ha-628200-m04:/home/docker/cp-test_ha-628200_ha-628200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test_ha-628200_ha-628200-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp testdata\cp-test.txt ha-628200-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile913262778\001\cp-test_ha-628200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m02:/home/docker/cp-test.txt ha-628200:/home/docker/cp-test_ha-628200-m02_ha-628200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test_ha-628200-m02_ha-628200.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m02:/home/docker/cp-test.txt ha-628200-m03:/home/docker/cp-test_ha-628200-m02_ha-628200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test_ha-628200-m02_ha-628200-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m02:/home/docker/cp-test.txt ha-628200-m04:/home/docker/cp-test_ha-628200-m02_ha-628200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test_ha-628200-m02_ha-628200-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp testdata\cp-test.txt ha-628200-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile913262778\001\cp-test_ha-628200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m03:/home/docker/cp-test.txt ha-628200:/home/docker/cp-test_ha-628200-m03_ha-628200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test_ha-628200-m03_ha-628200.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m03:/home/docker/cp-test.txt ha-628200-m02:/home/docker/cp-test_ha-628200-m03_ha-628200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test_ha-628200-m03_ha-628200-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m03:/home/docker/cp-test.txt ha-628200-m04:/home/docker/cp-test_ha-628200-m03_ha-628200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test_ha-628200-m03_ha-628200-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp testdata\cp-test.txt ha-628200-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile913262778\001\cp-test_ha-628200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m04:/home/docker/cp-test.txt ha-628200:/home/docker/cp-test_ha-628200-m04_ha-628200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200 "sudo cat /home/docker/cp-test_ha-628200-m04_ha-628200.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m04:/home/docker/cp-test.txt ha-628200-m02:/home/docker/cp-test_ha-628200-m04_ha-628200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m02 "sudo cat /home/docker/cp-test_ha-628200-m04_ha-628200-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 cp ha-628200-m04:/home/docker/cp-test.txt ha-628200-m03:/home/docker/cp-test_ha-628200-m04_ha-628200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 ssh -n ha-628200-m03 "sudo cat /home/docker/cp-test_ha-628200-m04_ha-628200-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (33.23s)
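
Note: the CopyFile block runs the full copy matrix over all four nodes: host-to-node, node-to-host, and node-to-node for every pair, verifying each transfer with ssh -n "sudo cat". The three shapes it exercises (profile and paths illustrative):

    minikube -p demo-ha cp testdata\cp-test.txt demo-ha:/home/docker/cp-test.txt
    minikube -p demo-ha cp demo-ha:/home/docker/cp-test.txt C:\tmp\cp-test.txt
    minikube -p demo-ha cp demo-ha:/home/docker/cp-test.txt demo-ha-m02:/home/docker/cp-test.txt
    minikube -p demo-ha ssh -n demo-ha-m02 "sudo cat /home/docker/cp-test.txt"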

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.39s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 node stop m02 --alsologtostderr -v 5: (11.8863633s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: exit status 7 (1.5013226s)

                                                
                                                
-- stdout --
	ha-628200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-628200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-628200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-628200-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:26:02.213895    4368 out.go:360] Setting OutFile to fd 1688 ...
	I1216 05:26:02.256729    4368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:26:02.256729    4368 out.go:374] Setting ErrFile to fd 2020...
	I1216 05:26:02.256729    4368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:26:02.268069    4368 out.go:368] Setting JSON to false
	I1216 05:26:02.268149    4368 mustload.go:66] Loading cluster: ha-628200
	I1216 05:26:02.268149    4368 notify.go:221] Checking for updates...
	I1216 05:26:02.268721    4368 config.go:182] Loaded profile config "ha-628200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 05:26:02.268805    4368 status.go:174] checking status of ha-628200 ...
	I1216 05:26:02.275900    4368 cli_runner.go:164] Run: docker container inspect ha-628200 --format={{.State.Status}}
	I1216 05:26:02.335228    4368 status.go:371] ha-628200 host status = "Running" (err=<nil>)
	I1216 05:26:02.335228    4368 host.go:66] Checking if "ha-628200" exists ...
	I1216 05:26:02.339639    4368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-628200
	I1216 05:26:02.396899    4368 host.go:66] Checking if "ha-628200" exists ...
	I1216 05:26:02.400903    4368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:26:02.403903    4368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-628200
	I1216 05:26:02.460488    4368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51007 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-628200\id_rsa Username:docker}
	I1216 05:26:02.600162    4368 ssh_runner.go:195] Run: systemctl --version
	I1216 05:26:02.614847    4368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:26:02.637007    4368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-628200
	I1216 05:26:02.693297    4368 kubeconfig.go:125] found "ha-628200" server: "https://127.0.0.1:51011"
	I1216 05:26:02.693397    4368 api_server.go:166] Checking apiserver status ...
	I1216 05:26:02.697777    4368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:02.722904    4368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2365/cgroup
	I1216 05:26:02.735814    4368 api_server.go:182] apiserver freezer: "7:freezer:/docker/4acc1223f84f2e5a6a6453de0d4456254306be095eb9dddb29e28c8a5a2723c0/kubepods/burstable/pod40f2087169d665863bc517a96f4eed88/564ccac1938851ebb82c812805dbbeaa673f52cc8d0df8e898c128196c8d5031"
	I1216 05:26:02.739657    4368 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4acc1223f84f2e5a6a6453de0d4456254306be095eb9dddb29e28c8a5a2723c0/kubepods/burstable/pod40f2087169d665863bc517a96f4eed88/564ccac1938851ebb82c812805dbbeaa673f52cc8d0df8e898c128196c8d5031/freezer.state
	I1216 05:26:02.753701    4368 api_server.go:204] freezer state: "THAWED"
	I1216 05:26:02.753701    4368 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51011/healthz ...
	I1216 05:26:02.765413    4368 api_server.go:279] https://127.0.0.1:51011/healthz returned 200:
	ok
	I1216 05:26:02.765413    4368 status.go:463] ha-628200 apiserver status = Running (err=<nil>)
	I1216 05:26:02.765413    4368 status.go:176] ha-628200 status: &{Name:ha-628200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:26:02.765502    4368 status.go:174] checking status of ha-628200-m02 ...
	I1216 05:26:02.773322    4368 cli_runner.go:164] Run: docker container inspect ha-628200-m02 --format={{.State.Status}}
	I1216 05:26:02.825068    4368 status.go:371] ha-628200-m02 host status = "Stopped" (err=<nil>)
	I1216 05:26:02.825068    4368 status.go:384] host is not running, skipping remaining checks
	I1216 05:26:02.825068    4368 status.go:176] ha-628200-m02 status: &{Name:ha-628200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:26:02.825133    4368 status.go:174] checking status of ha-628200-m03 ...
	I1216 05:26:02.832387    4368 cli_runner.go:164] Run: docker container inspect ha-628200-m03 --format={{.State.Status}}
	I1216 05:26:02.888906    4368 status.go:371] ha-628200-m03 host status = "Running" (err=<nil>)
	I1216 05:26:02.888906    4368 host.go:66] Checking if "ha-628200-m03" exists ...
	I1216 05:26:02.892972    4368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-628200-m03
	I1216 05:26:02.948459    4368 host.go:66] Checking if "ha-628200-m03" exists ...
	I1216 05:26:02.953796    4368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:26:02.957135    4368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-628200-m03
	I1216 05:26:03.009803    4368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51126 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-628200-m03\id_rsa Username:docker}
	I1216 05:26:03.130223    4368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:26:03.154670    4368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-628200
	I1216 05:26:03.211958    4368 kubeconfig.go:125] found "ha-628200" server: "https://127.0.0.1:51011"
	I1216 05:26:03.211958    4368 api_server.go:166] Checking apiserver status ...
	I1216 05:26:03.216171    4368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:26:03.242277    4368 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2279/cgroup
	I1216 05:26:03.254314    4368 api_server.go:182] apiserver freezer: "7:freezer:/docker/2114ec62b2bb891c7bea1249c2b8e468ee98922716a089b5e8f4cb6e56be6e73/kubepods/burstable/pod3c4b6a792f5b39226cd221382f4b8f8b/018863624abad38425ebcc71cf6f6372f0b203fad37c9f8c6eafae3b3029012b"
	I1216 05:26:03.258524    4368 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2114ec62b2bb891c7bea1249c2b8e468ee98922716a089b5e8f4cb6e56be6e73/kubepods/burstable/pod3c4b6a792f5b39226cd221382f4b8f8b/018863624abad38425ebcc71cf6f6372f0b203fad37c9f8c6eafae3b3029012b/freezer.state
	I1216 05:26:03.273394    4368 api_server.go:204] freezer state: "THAWED"
	I1216 05:26:03.273394    4368 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51011/healthz ...
	I1216 05:26:03.282901    4368 api_server.go:279] https://127.0.0.1:51011/healthz returned 200:
	ok
	I1216 05:26:03.282901    4368 status.go:463] ha-628200-m03 apiserver status = Running (err=<nil>)
	I1216 05:26:03.282901    4368 status.go:176] ha-628200-m03 status: &{Name:ha-628200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:26:03.282901    4368 status.go:174] checking status of ha-628200-m04 ...
	I1216 05:26:03.290459    4368 cli_runner.go:164] Run: docker container inspect ha-628200-m04 --format={{.State.Status}}
	I1216 05:26:03.345592    4368 status.go:371] ha-628200-m04 host status = "Running" (err=<nil>)
	I1216 05:26:03.345592    4368 host.go:66] Checking if "ha-628200-m04" exists ...
	I1216 05:26:03.351854    4368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-628200-m04
	I1216 05:26:03.416602    4368 host.go:66] Checking if "ha-628200-m04" exists ...
	I1216 05:26:03.421910    4368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:26:03.424629    4368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-628200-m04
	I1216 05:26:03.481539    4368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51263 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-628200-m04\id_rsa Username:docker}
	I1216 05:26:03.599393    4368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:26:03.615943    4368 status.go:176] ha-628200-m04 status: &{Name:ha-628200-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.39s)
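
Note: with m02 stopped, status exits 7 by design: minikube status encodes host/kubelet/apiserver failures as bits in the exit code (see minikube status --help), so the test treats the non-zero exit as the expected degraded signal rather than a failure. The check pattern (cmd.exe syntax):

    minikube -p demo-ha node stop m02
    minikube -p demo-ha status
    echo %ERRORLEVEL%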

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5465914s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.55s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (103.52s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 node start m02 --alsologtostderr -v 5
E1216 05:26:18.285285   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:26:44.888946   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:26:45.992130   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:27:01.811009   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 node start m02 --alsologtostderr -v 5: (1m41.4797195s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: (1.9053569s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (103.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.973089s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.97s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 stop --alsologtostderr -v 5: (38.771375s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 start --wait true --alsologtostderr -v 5
E1216 05:29:55.686054   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 start --wait true --alsologtostderr -v 5: (2m9.3283248s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.41s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (14.65s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 node delete m03 --alsologtostderr -v 5: (12.8354401s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: (1.4160113s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4628556s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.46s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (37.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 stop --alsologtostderr -v 5
E1216 05:31:18.288317   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 stop --alsologtostderr -v 5: (36.9302536s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: exit status 7 (337.9765ms)

                                                
                                                
-- stdout --
	ha-628200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-628200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-628200-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:31:32.213014   12136 out.go:360] Setting OutFile to fd 1728 ...
	I1216 05:31:32.258824   12136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:31:32.258824   12136 out.go:374] Setting ErrFile to fd 1552...
	I1216 05:31:32.258824   12136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:31:32.271004   12136 out.go:368] Setting JSON to false
	I1216 05:31:32.271004   12136 mustload.go:66] Loading cluster: ha-628200
	I1216 05:31:32.271004   12136 notify.go:221] Checking for updates...
	I1216 05:31:32.271588   12136 config.go:182] Loaded profile config "ha-628200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 05:31:32.271588   12136 status.go:174] checking status of ha-628200 ...
	I1216 05:31:32.280310   12136 cli_runner.go:164] Run: docker container inspect ha-628200 --format={{.State.Status}}
	I1216 05:31:32.332595   12136 status.go:371] ha-628200 host status = "Stopped" (err=<nil>)
	I1216 05:31:32.332595   12136 status.go:384] host is not running, skipping remaining checks
	I1216 05:31:32.332595   12136 status.go:176] ha-628200 status: &{Name:ha-628200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:31:32.332595   12136 status.go:174] checking status of ha-628200-m02 ...
	I1216 05:31:32.339587   12136 cli_runner.go:164] Run: docker container inspect ha-628200-m02 --format={{.State.Status}}
	I1216 05:31:32.396671   12136 status.go:371] ha-628200-m02 host status = "Stopped" (err=<nil>)
	I1216 05:31:32.396738   12136 status.go:384] host is not running, skipping remaining checks
	I1216 05:31:32.396738   12136 status.go:176] ha-628200-m02 status: &{Name:ha-628200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:31:32.396772   12136 status.go:174] checking status of ha-628200-m04 ...
	I1216 05:31:32.403657   12136 cli_runner.go:164] Run: docker container inspect ha-628200-m04 --format={{.State.Status}}
	I1216 05:31:32.458061   12136 status.go:371] ha-628200-m04 host status = "Stopped" (err=<nil>)
	I1216 05:31:32.458061   12136 status.go:384] host is not running, skipping remaining checks
	I1216 05:31:32.458061   12136 status.go:176] ha-628200-m04 status: &{Name:ha-628200-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.27s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (84.15s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 start --wait true --alsologtostderr -v 5 --driver=docker
E1216 05:32:01.814280   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 start --wait true --alsologtostderr -v 5 --driver=docker: (1m22.3950044s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: (1.4355911s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4899812s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.49s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (100.52s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 node add --control-plane --alsologtostderr -v 5: (1m38.6143957s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-628200 status --alsologtostderr -v 5: (1.9044931s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (100.52s)
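
Note: node add --control-plane joins a fresh control-plane node to the running HA cluster, restoring the member removed in DeleteSecondaryNode; without the flag the same command adds a worker, as in AddWorkerNode earlier. Minimal form:

    minikube -p demo-ha node add --control-plane
    minikube -p demo-ha status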

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9403813s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.94s)

                                                
                                    
TestImageBuild/serial/Setup (49.18s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-658500 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-658500 --driver=docker: (49.1845811s)
--- PASS: TestImageBuild/serial/Setup (49.18s)

                                                
                                    
TestImageBuild/serial/NormalBuild (4.64s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-658500
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-658500: (4.6404533s)
--- PASS: TestImageBuild/serial/NormalBuild (4.64s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (2.16s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-658500
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-658500: (2.1623932s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.16s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.33s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-658500
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-658500: (1.3331388s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.33s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-658500
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-658500: (1.288009s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.29s)
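
Note: the four ImageBuild subtests cover the main image build knobs: a plain build, --build-opt for daemon-side options (build-arg=..., no-cache), .dockerignore handling, and -f for a Dockerfile at a non-default path inside the build context. Condensed (context paths illustrative):

    minikube -p demo image build -t aaa:latest ./ctx
    minikube -p demo image build -t aaa:latest --build-opt=build-arg=ENV_A=v --build-opt=no-cache ./ctx
    minikube -p demo image build -t aaa:latest -f inner/Dockerfile ./ctx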

                                                
                                    
TestJSONOutput/start/Command (81.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-147100 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
E1216 05:36:18.291975   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:01.817683   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-147100 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m21.6282933s)
--- PASS: TestJSONOutput/start/Command (81.63s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.14s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-147100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-147100 --output=json --user=testUser: (1.1387932s)
--- PASS: TestJSONOutput/pause/Command (1.14s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.94s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-147100 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.94s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.18s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-147100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-147100 --output=json --user=testUser: (12.1788048s)
--- PASS: TestJSONOutput/stop/Command (12.18s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.65s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-030200 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-030200 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (195.1309ms)
-- stdout --
	{"specversion":"1.0","id":"b629a0e0-d1c2-4c72-998b-87454151ea82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-030200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"323c565f-02ae-48b0-9956-824046153a35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"d5214830-e62f-42fa-a952-15552613f189","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"450f8dc9-355b-43fc-9aa8-78891390bed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0071edc1-d5d9-4fe3-a9fe-20fad11f3357","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22141"}}
	{"specversion":"1.0","id":"9e965d00-2ab3-40c8-8443-eeeeb491a0f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f130850-2034-4837-8d25-c94a72ddc8be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-030200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-030200
--- PASS: TestErrorJSONOutput (0.65s)
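Each stdout line above is a single CloudEvents-style JSON object, which is what makes --output=json machine-readable: step events carry currentstep/totalsteps, info events carry a message, and the final error event carries the exit code and reason (DRV_UNSUPPORTED_OS, exit 56). A minimal Go sketch for decoding such a stream, with field names taken from the events shown above:

// eventsketch: decode one event per line, as in the stdout captured above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible above; "data" is a flat string map in
// every event this run printed.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not an event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}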

TestKicCustomNetwork/create_custom_network (53.41s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-753000 --network=
E1216 05:37:41.362204   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-753000 --network=: (49.7906577s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-753000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-753000
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-753000: (3.560257s)
--- PASS: TestKicCustomNetwork/create_custom_network (53.41s)

TestKicCustomNetwork/use_default_bridge_network (51.9s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-543400 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-543400 --network=bridge: (48.690232s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-543400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-543400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-543400: (3.1509074s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (51.90s)

TestKicExistingNetwork (53.92s)

=== RUN   TestKicExistingNetwork
I1216 05:39:24.983271   11704 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 05:39:25.037505   11704 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 05:39:25.040505   11704 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 05:39:25.040505   11704 cli_runner.go:164] Run: docker network inspect existing-network
W1216 05:39:25.092508   11704 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 05:39:25.092508   11704 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1216 05:39:25.092508   11704 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1216 05:39:25.095511   11704 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 05:39:25.164506   11704 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d4270}
I1216 05:39:25.164506   11704 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1216 05:39:25.167506   11704 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1216 05:39:25.227507   11704 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1216 05:39:25.227507   11704 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:
stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1216 05:39:25.227507   11704 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1216 05:39:25.244507   11704 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1216 05:39:25.257517   11704 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016d4ae0}
I1216 05:39:25.257517   11704 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 05:39:25.261507   11704 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 05:39:25.409702   11704 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-702900 --network=existing-network
E1216 05:39:38.766066   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:39:55.692842   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-702900 --network=existing-network: (50.1895386s)
helpers_test.go:176: Cleaning up "existing-network-702900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-702900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-702900: (3.1782003s)
I1216 05:40:18.843794   11704 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (53.92s)
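The log above shows minikube's subnet-retry behavior in network_create.go: the first candidate 192.168.49.0/24 fails with "Pool overlaps with other one on this address space", so it advances to 192.168.58.0/24 and succeeds. A minimal Go sketch of that retry loop; the step of 9 between candidates matches the two subnets seen here, but the candidate list and error matching are simplifications, not minikube's actual code:

// netsketch: try docker network create on candidate /24s until one is free.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createNetwork(name string) (string, error) {
	for third := 49; third <= 103; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // this address space is taken; try the next candidate
		}
		return "", fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	fmt.Println(createNetwork("existing-network"))
}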

TestKicCustomSubnet (53.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-239600 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-239600 --subnet=192.168.60.0/24: (49.6393849s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-239600 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-239600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-239600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-239600: (3.5333997s)
--- PASS: TestKicCustomSubnet (53.24s)

TestKicStaticIP (56.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-593900 --static-ip=192.168.200.200
E1216 05:41:18.294557   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:42:01.820117   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-593900 --static-ip=192.168.200.200: (52.3922037s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-593900 ip
helpers_test.go:176: Cleaning up "static-ip-593900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-593900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-593900: (3.9701491s)
--- PASS: TestKicStaticIP (56.69s)
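The check this test performs is visible in the two commands above: start with --static-ip=192.168.200.200, then read the address back with `minikube -p static-ip-593900 ip`. A minimal Go sketch of that verification step, with binary path and profile name taken from the log:

// ipsketch: confirm the node got the requested static IP.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.200.200"
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "static-ip-593900", "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("static IP mismatch: got %s, want %s\n", got, want)
	}
}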

TestMainNoArgs (0.16s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (100.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-128600 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-128600 --driver=docker: (46.2564342s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-128600 --driver=docker
E1216 05:43:24.901832   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-128600 --driver=docker: (44.4957134s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-128600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1509846s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-128600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1823249s)
helpers_test.go:176: Cleaning up "second-128600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-128600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-128600: (3.6176327s)
helpers_test.go:176: Cleaning up "first-128600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-128600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-128600: (3.6607834s)
--- PASS: TestMinikubeProfile (100.84s)
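The test switches the active profile and then inspects `profile list -ojson`. A minimal Go sketch of consuming that output; the top-level "valid"/"invalid" keys and the "Name" field are assumptions about minikube's profile list JSON (the schema is not shown in this log), and only Name is relied on here:

// profilesketch: list the valid profiles by name.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList assumes `profile list -ojson` returns "valid" and "invalid"
// arrays whose entries carry a Name field.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name) // first-128600 and second-128600 in this run
	}
}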

TestMountStart/serial/StartWithMountFirst (13.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-654200 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3345142523\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-654200 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3345142523\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (12.8297519s)
--- PASS: TestMountStart/serial/StartWithMountFirst (13.83s)
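Note the shape of the --mount-string above: the Windows host path carries its own drive-letter colon, so host and guest must be split on the last ':' rather than the first. A minimal sketch of that split, assuming the guest path itself contains no colon; this illustrates the parsing problem, it is not minikube's actual parser:

// mountsketch: split host:guest where host may contain C:\.
package main

import (
	"fmt"
	"strings"
)

func splitMountString(s string) (host, guest string, ok bool) {
	i := strings.LastIndex(s, ":")
	if i < 0 {
		return "", "", false
	}
	return s[:i], s[i+1:], true
}

func main() {
	host, guest, _ := splitMountString(`C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3345142523\001:/minikube-host`)
	fmt.Println(host)  // the Windows temp directory, drive-letter colon intact
	fmt.Println(guest) // /minikube-host
}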

TestMountStart/serial/VerifyMountFirst (0.57s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-654200 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.57s)

TestMountStart/serial/StartWithMountSecond (13.37s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-654200 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3345142523\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-654200 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3345142523\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.3690445s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.37s)

TestMountStart/serial/VerifyMountSecond (0.54s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-654200 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.54s)

TestMountStart/serial/DeleteFirst (2.43s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-654200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-654200 --alsologtostderr -v=5: (2.4294959s)
--- PASS: TestMountStart/serial/DeleteFirst (2.43s)

TestMountStart/serial/VerifyMountPostDelete (0.51s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-654200 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.51s)

TestMountStart/serial/Stop (1.87s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-654200
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-654200: (1.8735369s)
--- PASS: TestMountStart/serial/Stop (1.87s)

TestMountStart/serial/RestartStopped (10.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-654200
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-654200: (9.7357563s)
--- PASS: TestMountStart/serial/RestartStopped (10.74s)

TestMountStart/serial/VerifyMountPostStop (0.51s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-654200 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.51s)

TestMultiNode/serial/FreshStart2Nodes (129.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1216 05:44:55.696049   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:46:18.298156   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m8.9315943s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.90s)

TestMultiNode/serial/DeployApp2Nodes (7.7s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- rollout status deployment/busybox: (3.8704174s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-pns9v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-tjfjb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-pns9v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-tjfjb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-pns9v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-tjfjb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.70s)

TestMultiNode/serial/PingHostFrom2Pods (1.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-pns9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-pns9v -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-tjfjb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-817700 -- exec busybox-7b57f96db7-tjfjb -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.74s)
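The pipeline in the commands above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes line 5 of nslookup's output and extracts its third single-space-separated field, which on the busybox nslookup layout this assumes is the resolved address (192.168.65.254 here, which the next command pings). A minimal Go sketch of the same extraction; the sample output shape is an assumption for illustration:

// dnssketch: reproduce awk 'NR==5' | cut -d' ' -f3.
package main

import (
	"fmt"
	"strings"
)

// fieldFromLine uses strings.Split rather than strings.Fields because
// cut -d' ' does not collapse repeated spaces.
func fieldFromLine(output string, line, field int) string {
	lines := strings.Split(output, "\n")
	if line > len(lines) {
		return ""
	}
	cols := strings.Split(lines[line-1], " ")
	if field > len(cols) {
		return ""
	}
	return cols[field-1]
}

func main() {
	// Assumed shape of busybox nslookup output; the test feeds the live
	// output of `nslookup host.minikube.internal` through this extraction.
	out := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.65.254"
	fmt.Println(fieldFromLine(out, 5, 3)) // 192.168.65.254
}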

TestMultiNode/serial/AddNode (53.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-817700 -v=5 --alsologtostderr
E1216 05:47:01.824455   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-817700 -v=5 --alsologtostderr: (52.1511456s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr: (1.2873642s)
--- PASS: TestMultiNode/serial/AddNode (53.44s)

TestMultiNode/serial/MultiNodeLabels (0.13s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-817700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (1.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.3699718s)
--- PASS: TestMultiNode/serial/ProfileList (1.37s)

TestMultiNode/serial/CopyFile (18.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817700 status --output json --alsologtostderr: (1.2794695s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp testdata\cp-test.txt multinode-817700:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1941091681\001\cp-test_multinode-817700.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700:/home/docker/cp-test.txt multinode-817700-m02:/home/docker/cp-test_multinode-817700_multinode-817700-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m02 "sudo cat /home/docker/cp-test_multinode-817700_multinode-817700-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700:/home/docker/cp-test.txt multinode-817700-m03:/home/docker/cp-test_multinode-817700_multinode-817700-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m03 "sudo cat /home/docker/cp-test_multinode-817700_multinode-817700-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp testdata\cp-test.txt multinode-817700-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1941091681\001\cp-test_multinode-817700-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700-m02:/home/docker/cp-test.txt multinode-817700:/home/docker/cp-test_multinode-817700-m02_multinode-817700.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700 "sudo cat /home/docker/cp-test_multinode-817700-m02_multinode-817700.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700-m02:/home/docker/cp-test.txt multinode-817700-m03:/home/docker/cp-test_multinode-817700-m02_multinode-817700-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m03 "sudo cat /home/docker/cp-test_multinode-817700-m02_multinode-817700-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp testdata\cp-test.txt multinode-817700-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile1941091681\001\cp-test_multinode-817700-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700-m03:/home/docker/cp-test.txt multinode-817700:/home/docker/cp-test_multinode-817700-m03_multinode-817700.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700 "sudo cat /home/docker/cp-test_multinode-817700-m03_multinode-817700.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 cp multinode-817700-m03:/home/docker/cp-test.txt multinode-817700-m02:/home/docker/cp-test_multinode-817700-m03_multinode-817700-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 ssh -n multinode-817700-m02 "sudo cat /home/docker/cp-test_multinode-817700-m03_multinode-817700-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (18.93s)
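The block above exercises the full `minikube cp` matrix: local to node, node to local, and node to node, with each copy verified by `ssh -n <node> "sudo cat ..."`. A minimal Go sketch of one push-and-verify round trip from that matrix, with binary, profile, and paths copied from the commands above and error handling trimmed:

// cpsketch: copy a local file into a node, then read it back over ssh.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	if err != nil {
		fmt.Println("command failed:", err)
	}
	return string(out)
}

func main() {
	run("-p", "multinode-817700", "cp", `testdata\cp-test.txt`, "multinode-817700:/home/docker/cp-test.txt")
	got := run("-p", "multinode-817700", "ssh", "-n", "multinode-817700", "sudo cat /home/docker/cp-test.txt")
	fmt.Println(got) // should match the local testdata\cp-test.txt contents
}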

TestMultiNode/serial/StopNode (3.66s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817700 node stop m03: (1.6445805s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817700 status: exit status 7 (1.0179253s)
-- stdout --
	multinode-817700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-817700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-817700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr: exit status 7 (996.5568ms)
-- stdout --
	multinode-817700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-817700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-817700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1216 05:48:13.176286   13860 out.go:360] Setting OutFile to fd 1552 ...
	I1216 05:48:13.217282   13860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:48:13.217282   13860 out.go:374] Setting ErrFile to fd 820...
	I1216 05:48:13.217282   13860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:48:13.228276   13860 out.go:368] Setting JSON to false
	I1216 05:48:13.228276   13860 mustload.go:66] Loading cluster: multinode-817700
	I1216 05:48:13.228276   13860 notify.go:221] Checking for updates...
	I1216 05:48:13.229278   13860 config.go:182] Loaded profile config "multinode-817700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 05:48:13.229278   13860 status.go:174] checking status of multinode-817700 ...
	I1216 05:48:13.236281   13860 cli_runner.go:164] Run: docker container inspect multinode-817700 --format={{.State.Status}}
	I1216 05:48:13.286279   13860 status.go:371] multinode-817700 host status = "Running" (err=<nil>)
	I1216 05:48:13.286279   13860 host.go:66] Checking if "multinode-817700" exists ...
	I1216 05:48:13.290276   13860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-817700
	I1216 05:48:13.341303   13860 host.go:66] Checking if "multinode-817700" exists ...
	I1216 05:48:13.345281   13860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:48:13.348276   13860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-817700
	I1216 05:48:13.400278   13860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-817700\id_rsa Username:docker}
	I1216 05:48:13.515737   13860 ssh_runner.go:195] Run: systemctl --version
	I1216 05:48:13.533027   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:48:13.553713   13860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-817700
	I1216 05:48:13.608044   13860 kubeconfig.go:125] found "multinode-817700" server: "https://127.0.0.1:52418"
	I1216 05:48:13.608044   13860 api_server.go:166] Checking apiserver status ...
	I1216 05:48:13.613133   13860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:48:13.636375   13860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2328/cgroup
	I1216 05:48:13.649432   13860 api_server.go:182] apiserver freezer: "7:freezer:/docker/28909f0028aee6bf7201184e094c3651a9453b259b246b50902be6b10b090d8b/kubepods/burstable/poda9b229ec9035f649ef520ffa3ace8339/78ac06bcbcee64e9bf100b04852090b395341bdc39a93ab88fc2a4332fcd6c9c"
	I1216 05:48:13.654121   13860 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/28909f0028aee6bf7201184e094c3651a9453b259b246b50902be6b10b090d8b/kubepods/burstable/poda9b229ec9035f649ef520ffa3ace8339/78ac06bcbcee64e9bf100b04852090b395341bdc39a93ab88fc2a4332fcd6c9c/freezer.state
	I1216 05:48:13.667587   13860 api_server.go:204] freezer state: "THAWED"
	I1216 05:48:13.667587   13860 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52418/healthz ...
	I1216 05:48:13.681054   13860 api_server.go:279] https://127.0.0.1:52418/healthz returned 200:
	ok
	I1216 05:48:13.681116   13860 status.go:463] multinode-817700 apiserver status = Running (err=<nil>)
	I1216 05:48:13.681116   13860 status.go:176] multinode-817700 status: &{Name:multinode-817700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:48:13.681151   13860 status.go:174] checking status of multinode-817700-m02 ...
	I1216 05:48:13.688222   13860 cli_runner.go:164] Run: docker container inspect multinode-817700-m02 --format={{.State.Status}}
	I1216 05:48:13.744350   13860 status.go:371] multinode-817700-m02 host status = "Running" (err=<nil>)
	I1216 05:48:13.744350   13860 host.go:66] Checking if "multinode-817700-m02" exists ...
	I1216 05:48:13.749350   13860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-817700-m02
	I1216 05:48:13.807560   13860 host.go:66] Checking if "multinode-817700-m02" exists ...
	I1216 05:48:13.812771   13860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:48:13.816229   13860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-817700-m02
	I1216 05:48:13.869405   13860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52467 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-817700-m02\id_rsa Username:docker}
	I1216 05:48:13.985552   13860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:48:14.004292   13860 status.go:176] multinode-817700-m02 status: &{Name:multinode-817700-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:48:14.004292   13860 status.go:174] checking status of multinode-817700-m03 ...
	I1216 05:48:14.012349   13860 cli_runner.go:164] Run: docker container inspect multinode-817700-m03 --format={{.State.Status}}
	I1216 05:48:14.070572   13860 status.go:371] multinode-817700-m03 host status = "Stopped" (err=<nil>)
	I1216 05:48:14.070572   13860 status.go:384] host is not running, skipping remaining checks
	I1216 05:48:14.070572   13860 status.go:176] multinode-817700-m03 status: &{Name:multinode-817700-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.66s)
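Note that both status invocations above exit with status 7 while still printing the per-node table: in this run that non-zero code is what signals "at least one node stopped", so the test asserts on the exit code rather than treating it as a command failure. A minimal Go sketch of reading that code, with binary and profile from the log; the meaning of 7 is taken from the output captured above:

// statussketch: distinguish "a node is stopped" from a hard failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "multinode-817700", "status")
	out, err := cmd.Output()
	fmt.Printf("%s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Println("at least one node reports Stopped (exit status 7, as in the run above)")
	} else if err != nil {
		fmt.Println("status failed:", err)
	}
}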

TestMultiNode/serial/StartAfterStop (12.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817700 node start m03 -v=5 --alsologtostderr: (11.5635483s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817700 status -v=5 --alsologtostderr: (1.2865114s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.98s)

TestMultiNode/serial/RestartKeepsNodes (85.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-817700
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-817700
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-817700: (24.6872736s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817700 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817700 --wait=true -v=5 --alsologtostderr: (1m0.7756902s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-817700
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.77s)

TestMultiNode/serial/DeleteNode (8.23s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 node delete m03
E1216 05:49:55.699858   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817700 node delete m03: (6.8901679s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.23s)

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-817700 stop: (23.4594118s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817700 status: exit status 7 (273.4488ms)
-- stdout --
	multinode-817700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-817700-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr: exit status 7 (267.4339ms)
-- stdout --
	multinode-817700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-817700-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1216 05:50:24.873739   12236 out.go:360] Setting OutFile to fd 1036 ...
	I1216 05:50:24.915715   12236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:50:24.915715   12236 out.go:374] Setting ErrFile to fd 1148...
	I1216 05:50:24.915715   12236 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:50:24.927469   12236 out.go:368] Setting JSON to false
	I1216 05:50:24.927529   12236 mustload.go:66] Loading cluster: multinode-817700
	I1216 05:50:24.927673   12236 notify.go:221] Checking for updates...
	I1216 05:50:24.927673   12236 config.go:182] Loaded profile config "multinode-817700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1216 05:50:24.928198   12236 status.go:174] checking status of multinode-817700 ...
	I1216 05:50:24.934808   12236 cli_runner.go:164] Run: docker container inspect multinode-817700 --format={{.State.Status}}
	I1216 05:50:24.988414   12236 status.go:371] multinode-817700 host status = "Stopped" (err=<nil>)
	I1216 05:50:24.988414   12236 status.go:384] host is not running, skipping remaining checks
	I1216 05:50:24.988414   12236 status.go:176] multinode-817700 status: &{Name:multinode-817700 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:50:24.988414   12236 status.go:174] checking status of multinode-817700-m02 ...
	I1216 05:50:24.996248   12236 cli_runner.go:164] Run: docker container inspect multinode-817700-m02 --format={{.State.Status}}
	I1216 05:50:25.048170   12236 status.go:371] multinode-817700-m02 host status = "Stopped" (err=<nil>)
	I1216 05:50:25.048170   12236 status.go:384] host is not running, skipping remaining checks
	I1216 05:50:25.048170   12236 status.go:176] multinode-817700-m02 status: &{Name:multinode-817700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)
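Note on the exit code above: exit status 7 from minikube status is the expected result for a stopped cluster, not a command failure; every component reports Stopped in the captured stdout. A minimal sketch of scripting around it, assuming PowerShell:

	out/minikube-windows-amd64.exe -p multinode-817700 status
	if ($LASTEXITCODE -eq 7) { Write-Output 'cluster is stopped' }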
TestMultiNode/serial/RestartMultiNode (57.04s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817700 --wait=true -v=5 --alsologtostderr --driver=docker
E1216 05:51:18.302515   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817700 --wait=true -v=5 --alsologtostderr --driver=docker: (55.7099956s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-817700 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.04s)
TestMultiNode/serial/ValidateNameConflict (51.55s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-817700
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817700-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-817700-m02 --driver=docker: exit status 14 (202.9014ms)
-- stdout --
	* [multinode-817700-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-817700-m02' is duplicated with machine name 'multinode-817700-m02' in profile 'multinode-817700'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-817700-m03 --driver=docker
E1216 05:52:01.827399   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-817700-m03 --driver=docker: (46.8472955s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-817700
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-817700: exit status 80 (658.0013ms)
-- stdout --
	* Adding node m03 to cluster multinode-817700 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-817700-m03 already exists in multinode-817700-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_20.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-817700-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-817700-m03: (3.6894092s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.55s)
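Note on the conflicts above: secondary machines in a multi-node profile are named <profile>-m02, <profile>-m03, and so on, so a standalone profile named multinode-817700-m02 collides with an existing machine name (MK_USAGE), and node add fails (GUEST_NODE_ADD) while a standalone multinode-817700-m03 profile holds the next machine name. Once the conflicting profile is deleted, as the cleanup step above does, the add goes through; a minimal sketch:

	out/minikube-windows-amd64.exe delete -p multinode-817700-m03
	out/minikube-windows-amd64.exe node add -p multinode-817700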
TestPreload (142.6s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-152500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
preload_test.go:41: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-152500 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m16.278294s)
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-152500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-152500 image pull gcr.io/k8s-minikube/busybox: (2.1932269s)
preload_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-152500
preload_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-152500: (11.9865538s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-152500 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
E1216 05:54:21.389707   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-152500 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (47.9589129s)
preload_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-152500 image list
helpers_test.go:176: Cleaning up "test-preload-152500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-152500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-152500: (3.712335s)
--- PASS: TestPreload (142.60s)
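Note on the flow above: the test asserts that an image pulled into a cluster started with --preload=false is still present after a stop and a restart with --preload=true. A minimal sketch for reproducing it by hand, with <profile> as a placeholder:

	out/minikube-windows-amd64.exe start -p <profile> --preload=false --driver=docker
	out/minikube-windows-amd64.exe -p <profile> image pull gcr.io/k8s-minikube/busybox
	out/minikube-windows-amd64.exe stop -p <profile>
	out/minikube-windows-amd64.exe start -p <profile> --preload=true --driver=docker
	out/minikube-windows-amd64.exe -p <profile> image list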
TestScheduledStopWindows (112.76s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-598400 --memory=3072 --driver=docker
E1216 05:54:55.702604   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-598400 --memory=3072 --driver=docker: (46.6941608s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-598400 --schedule 5m
minikube stop output:
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-598400 -n scheduled-stop-598400
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-598400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-598400 --schedule 5s
minikube stop output:
E1216 05:56:18.305539   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:56:18.780576   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-598400
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-598400: exit status 7 (222.0377ms)
-- stdout --
	scheduled-stop-598400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-598400 -n scheduled-stop-598400
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-598400 -n scheduled-stop-598400: exit status 7 (205.3174ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-598400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-598400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-598400: (2.5351179s)
--- PASS: TestScheduledStopWindows (112.76s)
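Note on the flow above: a scheduled stop is armed with --schedule and surfaces in the status TimeToStop field, both exercised in this test. Cancelling a pending stop uses the --cancel-scheduled flag of minikube stop, which this run does not exercise; a minimal sketch, with <profile> as a placeholder:

	out/minikube-windows-amd64.exe stop -p <profile> --schedule 5m
	out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p <profile>
	out/minikube-windows-amd64.exe stop -p <profile> --cancel-scheduled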
TestInsufficientStorage (28.99s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-843200 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-843200 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (25.155479s)
-- stdout --
	{"specversion":"1.0","id":"bf950d4a-f5d1-4595-93e5-31cd379a78a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-843200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76b824d0-1820-41c9-990f-b81fcc9e95f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"629fb226-2d6b-4207-b125-3db42d3294e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7ebd89a8-e9f1-40d4-95ab-d7d5bc4956ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"980cf557-d0d3-4148-b64c-b0a563f1a374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22141"}}
	{"specversion":"1.0","id":"b5cdec24-345a-4843-b1e8-b5a9a5c675dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4b6bb9c2-c9e7-48c7-9818-ce3436ca3fca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"49de5d7e-4032-473c-bb23-4f027a7a380b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"53e23f45-b359-42be-9830-2e54f477a27d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae193a6a-ec89-4131-bab1-ef209187a7a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"0f929fa1-561e-4413-adfe-4b826ea163c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-843200\" primary control-plane node in \"insufficient-storage-843200\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"777f54cf-59f5-442e-9f37-3bce7a67661c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d9f8c59-74d1-4207-b202-7f682a91e435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f48dd0ff-9db8-41be-ad13-0262ca629d67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-843200 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-843200 --output=json --layout=cluster: exit status 7 (578.7064ms)
-- stdout --
	{"Name":"insufficient-storage-843200","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-843200","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1216 05:57:01.219574     272 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-843200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-843200 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-843200 --output=json --layout=cluster: exit status 7 (553.4247ms)
-- stdout --
	{"Name":"insufficient-storage-843200","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-843200","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1216 05:57:01.768293    7680 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-843200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1216 05:57:01.791735    7680 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-843200\events.json: The system cannot find the file specified.
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-843200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-843200
E1216 05:57:01.831263   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-843200: (2.6969456s)
--- PASS: TestInsufficientStorage (28.99s)
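Note on the failure mode above: the test provokes RSRC_DOCKER_STORAGE through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE overrides visible in the JSON events, so no disk was actually full. For a real occurrence, the emitted advice amounts to the following, with --force skipping the check entirely per the error message (<profile> is a placeholder):

	docker system prune -a
	minikube ssh -- docker system prune
	out/minikube-windows-amd64.exe start -p <profile> --force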
TestRunningBinaryUpgrade (395.94s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1326958714.exe start -p running-upgrade-826900 --memory=3072 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1326958714.exe start -p running-upgrade-826900 --memory=3072 --vm-driver=docker: (1m12.398783s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-826900 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-826900 --memory=3072 --alsologtostderr -v=1 --driver=docker: (5m19.7693074s)
helpers_test.go:176: Cleaning up "running-upgrade-826900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-826900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-826900: (3.0058124s)
--- PASS: TestRunningBinaryUpgrade (395.94s)
TestMissingContainerUpgrade (139.77s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3995781664.exe start -p missing-upgrade-464300 --memory=3072 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3995781664.exe start -p missing-upgrade-464300 --memory=3072 --driver=docker: (1m0.2367911s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-464300
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-464300: (10.9289042s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-464300
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-464300 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-464300 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m4.4806646s)
helpers_test.go:176: Cleaning up "missing-upgrade-464300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-464300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-464300: (3.1974935s)
--- PASS: TestMissingContainerUpgrade (139.77s)
TestStoppedBinaryUpgrade/Setup (0.95s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (273.1ms)
-- stdout --
	* [NoKubernetes-205700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.27s)
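Note on the usage error above: --no-kubernetes and --kubernetes-version are mutually exclusive, and a version persisted in the global config trips the same MK_USAGE exit. The recovery restates the stderr advice:

	minikube config unset kubernetes-version
	out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --no-kubernetes --driver=docker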
TestPause/serial/Start (120.86s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-726600 --memory=3072 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-726600 --memory=3072 --install-addons=false --wait=all --driver=docker: (2m0.8627669s)
--- PASS: TestPause/serial/Start (120.86s)
TestNoKubernetes/serial/StartWithK8s (85.37s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m24.695294s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-205700 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.37s)
TestStoppedBinaryUpgrade/Upgrade (156.24s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.4091954209.exe start -p stopped-upgrade-205700 --memory=3072 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.4091954209.exe start -p stopped-upgrade-205700 --memory=3072 --vm-driver=docker: (2m5.6880279s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.4091954209.exe -p stopped-upgrade-205700 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.4091954209.exe -p stopped-upgrade-205700 stop: (2.2092894s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-205700 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-205700 --memory=3072 --alsologtostderr -v=1 --driver=docker: (28.3424186s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (156.24s)
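Note on the flow above: the upgrade path asserted here is to provision and stop the cluster with the released v1.35.0 binary, then start it with the binary under test. In outline, with <old> standing for the downloaded release executable and <profile> for the profile name:

	<old> start -p <profile> --memory=3072 --vm-driver=docker
	<old> -p <profile> stop
	out/minikube-windows-amd64.exe start -p <profile> --memory=3072 --driver=docker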
TestNoKubernetes/serial/StartWithStopK8s (26.06s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (22.7243033s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-205700 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-205700 status -o json: exit status 2 (599.7756ms)
-- stdout --
	{"Name":"NoKubernetes-205700","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-205700
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-205700: (2.7391079s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.06s)
TestNoKubernetes/serial/Start (15.12s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (15.1224269s)
--- PASS: TestNoKubernetes/serial/Start (15.12s)
TestPause/serial/SecondStartNoReconfiguration (47.34s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-726600 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-726600 --alsologtostderr -v=1 --driver=docker: (47.3181373s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.34s)
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.59s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-205700 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-205700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (589.7487ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.59s)
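Note on the exit code above: ssh exit status 3 is systemctl is-active reporting the kubelet unit as inactive, which is exactly what this test expects. A variant of the logged command without --quiet, so the state is printed:

	out/minikube-windows-amd64.exe ssh -p NoKubernetes-205700 -- sudo systemctl is-active kubelet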
TestNoKubernetes/serial/ProfileList (4.18s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.3572702s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.8200285s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.18s)
TestNoKubernetes/serial/Stop (2.76s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-205700
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-205700: (2.7567516s)
--- PASS: TestNoKubernetes/serial/Stop (2.76s)
TestNoKubernetes/serial/StartNoArgs (12.24s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --driver=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-205700 --driver=docker: (12.2427894s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (12.24s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.56s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-205700 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-205700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (562.3877ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.56s)
TestStoppedBinaryUpgrade/MinikubeLogs (2.63s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-205700
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-205700: (2.62873s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.63s)
TestPause/serial/Pause (1.55s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-726600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-726600 --alsologtostderr -v=5: (1.5541509s)
--- PASS: TestPause/serial/Pause (1.55s)
TestPause/serial/VerifyStatus (0.66s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-726600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-726600 --output=json --layout=cluster: exit status 2 (657.8914ms)
-- stdout --
	{"Name":"pause-726600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-726600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.66s)
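Note on the output above: with --output=json --layout=cluster, status reports HTTP-style codes (200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage earlier in this report) and exits non-zero while the cluster is paused. A minimal sketch for extracting a single field, assuming PowerShell:

	(out/minikube-windows-amd64.exe status -p pause-726600 --output=json --layout=cluster | ConvertFrom-Json).StatusName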
TestPause/serial/Unpause (1.9s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-726600 --alsologtostderr -v=5
E1216 05:59:55.706924   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-726600 --alsologtostderr -v=5: (1.9004623s)
--- PASS: TestPause/serial/Unpause (1.90s)
TestPause/serial/PauseAgain (1.76s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-726600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-726600 --alsologtostderr -v=5: (1.7577754s)
--- PASS: TestPause/serial/PauseAgain (1.76s)
TestPause/serial/DeletePaused (5.04s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-726600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-726600 --alsologtostderr -v=5: (5.0405718s)
--- PASS: TestPause/serial/DeletePaused (5.04s)
TestPause/serial/VerifyDeletedResources (4.98s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.8087664s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-726600
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-726600: exit status 1 (52.002ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-726600: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.98s)
TestStartStop/group/old-k8s-version/serial/FirstStart (66.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-164300 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-164300 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m6.3585152s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (66.36s)
TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-164300 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [067b22ba-55bc-48a2-9d55-d86a6f03943b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [067b22ba-55bc-48a2-9d55-d86a6f03943b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0065957s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-164300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-164300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-164300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.5857041s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-164300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)
TestStartStop/group/old-k8s-version/serial/Stop (12.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-164300 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-164300 --alsologtostderr -v=3: (12.261441s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.26s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-164300 -n old-k8s-version-164300
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-164300 -n old-k8s-version-164300: exit status 7 (216.6314ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-164300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.52s)
TestStartStop/group/old-k8s-version/serial/SecondStart (56.84s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-164300 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-164300 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (56.2116937s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-164300 -n old-k8s-version-164300
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.84s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p5dfc" [bf14eb41-5f5a-49ce-8380-f669895c8563] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0420205s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.33s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-p5dfc" [bf14eb41-5f5a-49ce-8380-f669895c8563] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0060256s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-164300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.33s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-164300 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.48s)
TestStartStop/group/old-k8s-version/serial/Pause (5.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-164300 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-164300 --alsologtostderr -v=1: (1.2341985s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-164300 -n old-k8s-version-164300
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-164300 -n old-k8s-version-164300: exit status 2 (634.3434ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-164300 -n old-k8s-version-164300
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-164300 -n old-k8s-version-164300: exit status 2 (661.9829ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-164300 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-164300 --alsologtostderr -v=1: (1.0326436s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-164300 -n old-k8s-version-164300
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-164300 -n old-k8s-version-164300: (1.021386s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-164300 -n old-k8s-version-164300
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.24s)
TestStartStop/group/embed-certs/serial/FirstStart (86.77s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-209000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
E1216 06:06:18.312953   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-209000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (1m26.765914s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.77s)
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-292200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
E1216 06:07:01.839020   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-292200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (1m19.0087229s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.01s)

TestStartStop/group/embed-certs/serial/DeployApp (11.57s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-209000 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0843b5e2-b5c5-4c5f-97b2-c1158e097481] Pending
helpers_test.go:353: "busybox" [0843b5e2-b5c5-4c5f-97b2-c1158e097481] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0843b5e2-b5c5-4c5f-97b2-c1158e097481] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0070848s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-209000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.57s)
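
The readiness gate used here (create, wait for pods matching "integration-test=busybox", then exec) can be approximated with kubectl alone; a sketch, assuming the busybox manifest from the repository's testdata directory:

    kubectl --context embed-certs-209000 create -f testdata\busybox.yaml
    kubectl --context embed-certs-209000 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-209000 exec busybox -- /bin/sh -c "ulimit -n"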

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-209000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-209000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3597987s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-209000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)
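
The --images and --registries flags override where the addon pulls its images from; the swap can be confirmed in the deployment the addon creates (the jsonpath query below is an illustrative check, not part of the test):

    out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-209000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-209000 get deploy metrics-server -n kube-system -o jsonpath="{.spec.template.spec.containers[0].image}"
    # the image reference should now point at the fake.domain registry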

TestStartStop/group/embed-certs/serial/Stop (12.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-209000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-209000 --alsologtostderr -v=3: (12.2575199s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-209000 -n embed-certs-209000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-209000 -n embed-certs-209000: exit status 7 (228.6155ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-209000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)

TestStartStop/group/embed-certs/serial/SecondStart (49.35s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-209000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-209000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (48.6900063s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-209000 -n embed-certs-209000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.35s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-292200 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0546e577-5ff6-4b21-aa49-260dc3ce70e6] Pending
helpers_test.go:353: "busybox" [0546e577-5ff6-4b21-aa49-260dc3ce70e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0546e577-5ff6-4b21-aa49-260dc3ce70e6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0055599s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-292200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.65s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-292200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-292200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.4972938s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-292200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.69s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-292200 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-292200 --alsologtostderr -v=3: (12.3070665s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200: exit status 7 (220.0268ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-292200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.61s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-292200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-292200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (1m3.3017214s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (63.92s)
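
A restart of an existing profile reuses its on-disk configuration, and the second start here repeats the same --apiserver-port=8444 the profile was created with before verifying the host state; a minimal sketch of that restart-and-verify flow:

    out/minikube-windows-amd64.exe start -p default-k8s-diff-port-292200 --memory=3072 --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
    out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-292200   # "Running", exit status 0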

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-85msn" [fef3e166-20b0-4835-87ff-bae335e0ade2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0104267s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.34s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-85msn" [fef3e166-20b0-4835-87ff-bae335e0ade2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0102234s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-209000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.34s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-209000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.49s)
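
The image audit can be rerun at any time; "image list" prints every image present in the node's container runtime, and the test merely notes images outside the expected Kubernetes set (such as the busybox image deployed earlier):

    out/minikube-windows-amd64.exe -p embed-certs-209000 image list --format=json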

TestStartStop/group/embed-certs/serial/Pause (5.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-209000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-209000 --alsologtostderr -v=1: (1.1919471s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-209000 -n embed-certs-209000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-209000 -n embed-certs-209000: exit status 2 (632.1053ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-209000 -n embed-certs-209000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-209000 -n embed-certs-209000: exit status 2 (648.3141ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-209000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-209000 -n embed-certs-209000
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-209000 -n embed-certs-209000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.17s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qtggg" [f7ca7f87-19ac-4b32-be19-baf93b496ccb] Running
E1216 06:09:14.255507   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0055513s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qtggg" [f7ca7f87-19ac-4b32-be19-baf93b496ccb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0217717s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-292200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.28s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-292200 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-292200 --alsologtostderr -v=1
E1216 06:09:24.498636   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-292200 --alsologtostderr -v=1: (1.1969641s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200: exit status 2 (595.923ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200: exit status 2 (588.0277ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-292200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-292200 -n default-k8s-diff-port-292200
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.83s)

TestNetworkPlugins/group/auto/Start (87.21s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
E1216 06:09:44.981512   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:09:55.714894   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:10:25.943600   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m27.2058724s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.21s)

TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-030800 "pgrep -a kubelet"
E1216 06:11:01.404534   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-002200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1216 06:11:01.665246   11704 config.go:182] Loaded profile config "auto-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

TestNetworkPlugins/group/auto/NetCatPod (15.49s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8vn5v" [52f495ec-5460-49e7-b07a-3afa9dd13f88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8vn5v" [52f495ec-5460-49e7-b07a-3afa9dd13f88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.0072962s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.49s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
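
The three probes above exercise, in order, service DNS resolution, pod-local loopback, and hairpin traffic (a pod reaching itself through its own service name); they can be rerun by hand against the same netcat deployment:

    kubectl --context auto-030800 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"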

TestNetworkPlugins/group/kindnet/Start (77.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
E1216 06:12:01.843522   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:43.885750   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:43.892450   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:43.904191   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:43.927206   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:43.970289   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:44.052423   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:44.214491   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:44.537583   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:45.179991   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:46.462159   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:49.025280   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:54.147487   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:12:58.795924   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:13:04.389939   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m17.8910914s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.89s)
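
Selecting a CNI at start time and confirming its controller pod came up is a two-step check; a sketch using the same profile and the "app=kindnet" label the next test waits on:

    out/minikube-windows-amd64.exe start -p kindnet-030800 --memory=3072 --cni=kindnet --driver=docker
    kubectl --context kindnet-030800 get pods -n kube-system -l app=kindnet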

TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-w5v4t" [bd676d64-9743-4a4b-8fd1-253351d8a7d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0196247s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-030800 "pgrep -a kubelet"
I1216 06:13:13.749991   11704 config.go:182] Loaded profile config "kindnet-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.56s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.54s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cq44f" [acf50462-b66a-4234-88a9-ba58e6361567] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cq44f" [acf50462-b66a-4234-88a9-ba58e6361567] Running
E1216 06:13:24.873041   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.0077409s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.54s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/calico/Start (114.59s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
E1216 06:14:03.999986   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:14:05.835326   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:14:31.710970   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-164300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:14:55.719195   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:15:27.758635   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-292200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m54.5886593s)
--- PASS: TestNetworkPlugins/group/calico/Start (114.59s)

TestStartStop/group/no-preload/serial/Stop (1.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-686300 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-686300 --alsologtostderr -v=3: (1.8776647s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.57s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-686300 -n no-preload-686300: exit status 7 (230.5773ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-686300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.57s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-wrqs4" [32bf00a0-0258-497c-ad8f-1ee716276745] Running
E1216 06:16:02.135044   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:02.141540   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:02.153299   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:02.175293   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:02.218297   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:02.300401   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:02.463073   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:02.784626   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0071478s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.59s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-030800 "pgrep -a kubelet"
E1216 06:16:03.426249   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1216 06:16:03.650891   11704 config.go:182] Loaded profile config "calico-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.59s)

TestNetworkPlugins/group/calico/NetCatPod (15.55s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-59ldr" [6b12efb0-e72d-4d18-8176-66ceffbdfaf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 06:16:04.708408   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-59ldr" [6b12efb0-e72d-4d18-8176-66ceffbdfaf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.0175697s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.55s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (81.64s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E1216 06:16:22.635241   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:43.117479   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:16:44.931545   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m21.64225s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.64s)
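
Besides the built-in plugin names, --cni accepts a path to a CNI manifest, which is how this run deploys flannel from the repository's testdata; the pods land in whatever namespace the manifest defines (kube-flannel for the upstream flannel manifest):

    out/minikube-windows-amd64.exe start -p custom-flannel-030800 --memory=3072 --cni=testdata\kube-flannel.yaml --driver=docker
    kubectl --context custom-flannel-030800 get pods -n kube-flannel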

TestNetworkPlugins/group/false/Start (82.98s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
E1216 06:17:01.847169   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:17:24.080584   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m22.9798576s)
--- PASS: TestNetworkPlugins/group/false/Start (82.98s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-030800 "pgrep -a kubelet"
I1216 06:17:44.529652   11704 config.go:182] Loaded profile config "custom-flannel-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.57s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mknzl" [b2f3d827-ccee-4264-aefb-342f96a4596a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-mknzl" [b2f3d827-ccee-4264-aefb-342f96a4596a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.0064644s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.57s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/false/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-030800 "pgrep -a kubelet"
I1216 06:18:19.796320   11704 config.go:182] Loaded profile config "false-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.53s)

TestNetworkPlugins/group/false/NetCatPod (15.52s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5789t" [2dbba201-d6ec-4efb-b7d0-84ab529bcc1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 06:18:27.674190   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-5789t" [2dbba201-d6ec-4efb-b7d0-84ab529bcc1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.0279771s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.52s)

TestNetworkPlugins/group/flannel/Start (78.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m18.6915177s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.69s)

TestNetworkPlugins/group/false/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)
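The HairPin subtest has the netcat pod dial its own Service name (`nc -w 5 -i 5 -z netcat 8080`); the connection only succeeds if the CNI's hairpin/loopback handling lets Service traffic return to the originating pod. A minimal sketch of the same probe, meant to run inside that pod:

    // Illustrative hairpin probe: dial our own Service name from
    // inside the pod, mirroring `nc -w 5 -z netcat 8080`.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
        if err != nil {
            fmt.Println("hairpin check failed:", err)
            return
        }
        conn.Close()
        fmt.Println("hairpin OK")
    }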

TestNetworkPlugins/group/enable-default-cni/Start (87.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m27.8573667s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.86s)

TestStartStop/group/newest-cni/serial/Stop (4.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-256200 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-256200 --alsologtostderr -v=3: (4.0238723s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.02s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.56s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-256200 -n newest-cni-256200: exit status 7 (225.166ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-256200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.56s)
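Per minikube's status help text, `minikube status` encodes component state in its exit code as a bitmask (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so exit status 7 right after a stop is the expected "everything down" value rather than a failure, which is why the test notes "(may be ok)". A small decoder sketch, using the profile name from the run above:

    // Decode the `minikube status` exit-code bitmask after a stop.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("minikube", "status", "-p", "newest-cni-256200").Run()
        code := 0
        if exitErr, ok := err.(*exec.ExitError); ok {
            code = exitErr.ExitCode()
        }
        fmt.Printf("exit %d: host down=%v, cluster down=%v, kubernetes down=%v\n",
            code, code&1 != 0, code&2 != 0, code&4 != 0)
    }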

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-qwrq7" [598ed48f-dddf-42b4-b05b-526657685ebb] Running
E1216 06:19:55.722782   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-902700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0060874s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-030800 "pgrep -a kubelet"
I1216 06:19:58.555053   11704 config.go:182] Loaded profile config "flannel-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

TestNetworkPlugins/group/flannel/NetCatPod (15.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gh2tp" [bdff3e8c-9c36-4fe9-a9f6-9e32c4734a91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gh2tp" [bdff3e8c-9c36-4fe9-a9f6-9e32c4734a91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.0067494s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.40s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-030800 "pgrep -a kubelet"
I1216 06:20:38.552907   11704 config.go:182] Loaded profile config "enable-default-cni-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-q6f9x" [69929ec7-9f2b-437f-98d4-51c91853096c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-q6f9x" [69929ec7-9f2b-437f-98d4-51c91853096c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.0051364s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.50s)

TestNetworkPlugins/group/bridge/Start (85.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1216 06:20:51.042462   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m25.736794s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.74s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/kubenet/Start (89.04s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E1216 06:21:38.043193   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-030800\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1216 06:22:01.851317   11704 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-555000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-030800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m29.0233924s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (89.04s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.54s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-030800 "pgrep -a kubelet"
I1216 06:22:15.981344   11704 config.go:182] Loaded profile config "bridge-030800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.54s)

TestNetworkPlugins/group/bridge/NetCatPod (15.49s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7ftvc" [23c87437-ace9-4f7e-899a-bef30d9df191] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7ftvc" [23c87437-ace9-4f7e-899a-bef30d9df191] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.007833s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.49s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-030800 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.57s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.53s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-030800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-f7t78" [67ce685a-bc7c-414f-9a5d-eb2830d4db32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-f7t78" [67ce685a-bc7c-414f-9a5d-eb2830d4db32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.0070401s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.53s)

TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-030800 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

TestNetworkPlugins/group/kubenet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-030800 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-256200 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)
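`image list --format=json` emits the image inventory as JSON, which the test compares against the expected image set. The exact JSON schema isn't captured in this report, so the sketch below deliberately decodes into a generic value rather than assuming field names:

    // Hedged consumer of `minikube image list --format=json`; the
    // top-level-array shape is an assumption, flagged at the decode.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "newest-cni-256200",
            "image", "list", "--format=json").Output()
        if err != nil {
            panic(err)
        }
        var images []json.RawMessage
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err) // schema assumption (top-level array) may not hold
        }
        fmt.Printf("image list returned %d entries\n", len(images))
    }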
Test skip (35/427)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
44 TestAddons/parallel/Registry 21.8
46 TestAddons/parallel/Ingress 19.46
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
99 TestFunctional/parallel/DashboardCmd 300.01
103 TestFunctional/parallel/MountCmd 0
106 TestFunctional/parallel/ServiceCmdConnect 14.31
117 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 0.53
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
257 TestGvisorAddon 0
286 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
287 TestISOImage 0
354 TestScheduledStopUnix 0
355 TestSkaffold 0
370 TestStartStop/group/disable-driver-mounts 0.48
395 TestNetworkPlugins/group/cilium 10.31

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (21.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.8529ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-h8kcq" [1ea56589-ff37-4516-b5a3-4d1694115bc4] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0748408s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zn4gt" [311c533d-59b9-4f0b-a38e-ea7fbe5caf5e] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0088271s
addons_test.go:394: (dbg) Run:  kubectl --context addons-555000 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-555000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-555000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.0608143s)
addons_test.go:409: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable registry --alsologtostderr -v=1: (1.4873473s)
--- SKIP: TestAddons/parallel/Registry (21.80s)
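The registry check above wgets the registry Service's cluster DNS name from a throwaway busybox pod; the skip reflects the connectivity assumptions noted at addons_test.go:409, not that probe itself. An in-cluster Go equivalent of the `wget --spider -S http://registry.kube-system.svc.cluster.local` step, as a sketch (only resolves from inside the cluster):

    // In-cluster HTTP probe of the registry addon's Service.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }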

TestAddons/parallel/Ingress (19.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-555000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-555000 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-555000 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [a6f26511-f527-425b-8348-c7bdf006b265] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [a6f26511-f527-425b-8348-c7bdf006b265] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.0066342s
I1216 04:33:47.360869   11704 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable ingress-dns --alsologtostderr -v=1: (1.5857672s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-555000 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-555000 addons disable ingress --alsologtostderr -v=1: (8.3243988s)
--- SKIP: TestAddons/parallel/Ingress (19.46s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-902700 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-902700 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 5592: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (14.31s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-902700 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-902700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-8rfsf" [b6d39176-9207-4543-9351-50e10680fb92] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-8rfsf" [b6d39176-9207-4543-9351-50e10680fb92] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.0061972s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (14.31s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-002200 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-002200 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 9948: Access is denied.
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.48s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-923500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-923500
--- SKIP: TestStartStop/group/disable-driver-mounts (0.48s)

TestNetworkPlugins/group/cilium (10.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-030800 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-030800" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-030800

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-030800

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-030800" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-030800" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-030800

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-030800

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-030800" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-030800" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-030800" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-030800" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-030800" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: kubelet daemon config:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> k8s: kubelet logs:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:59:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:53236
  name: pause-726600
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:59:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:53480
  name: stopped-upgrade-205700
contexts:
- context:
    cluster: pause-726600
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:59:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-726600
  name: pause-726600
- context:
    cluster: stopped-upgrade-205700
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:59:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: stopped-upgrade-205700
  name: stopped-upgrade-205700
current-context: stopped-upgrade-205700
kind: Config
users:
- name: pause-726600
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-726600\client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-726600\client.key
- name: stopped-upgrade-205700
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\stopped-upgrade-205700\client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\stopped-upgrade-205700\client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-030800

>>> host: docker daemon status:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: docker daemon config:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: docker system info:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: cri-docker daemon status:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: cri-docker daemon config:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: cri-dockerd version:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: containerd daemon status:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: containerd daemon config:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: containerd config dump:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: crio daemon status:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: crio daemon config:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: /etc/crio:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

>>> host: crio config:
* Profile "cilium-030800" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-030800"

----------------------- debugLogs end: cilium-030800 [took: 9.7757353s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-030800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-030800
--- SKIP: TestNetworkPlugins/group/cilium (10.31s)